Final Project Documentation

For our physical computing final project, Aileen and I teamed up to create a boxing training tool. The goal was to place appropriate sensors inside a boxing mitt so that it could calculate and send specific data about each hit to a computer for the user to review after a training session. Additionally, we wanted the entire system to be wireless for maximum effectiveness and usability as a legitimate training tool.

To begin, we researched boxing tools and spoke with both people who train and trainers. The user research revealed that the most important feedback users wanted to receive was the type of hit and the number of good hits completed during a training session. This guided us toward two key design features: the ability to distinguish between the different types of boxing hits, and a hit counter.

There are three distinct types of hits in boxing (jab, hook, and uppercut), and each can be thrown with either the left or right hand. The type of hit is differentiated by the position in which the mitt is held: for a jab the mitts are held straight forward and perpendicular to the ground, for a hook the mitts are turned inward to face one another, and for an uppercut the mitts are held facing down. In terms of appropriate sensors, we determined that an accelerometer, and potentially a gyroscope, would allow us to determine the relative position of the mitts from the x, y, and z axes and write if statements in our code to determine what kind of hit was thrown. To measure accuracy, we decided to use a medium-sized FSR that we would place at the mitt's "sweet spot," where an ideal hit makes contact with the glove. To provide some sort of calculation of the "intensity" of a hit, we planned to use the tilt of the mitt at the point of contact as a rough measure of how hard the user had hit it.

After choosing our sensors, we began collecting data to figure out the values we needed to write into the code to provide useful, conclusive results. This involved connecting our sensors to an Arduino and writing simple code that let us view the sensor output on the serial monitor. First, we wanted to create the distinctions between the three types of hits. We quickly realized that it was fairly simple to distinguish an uppercut from either a hook or a jab, since there were sharp changes on the accelerometer's y axis between the mitt facing forward and facing the ground.
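
The data-logging code itself was minimal; a sketch along these lines, assuming the FSR and an analog three-axis accelerometer on arbitrary analog pins (the pin choices here are illustrative, not our exact wiring):

```cpp
// Minimal data-logging sketch: print raw sensor readings to the serial
// monitor so the ranges for each hit type can be studied by hand.
// Pin assignments are illustrative (FSR on A0, analog accelerometer on A1-A3).

const int fsrPin = A0;
const int xPin = A1;
const int yPin = A2;
const int zPin = A3;

void setup() {
  Serial.begin(9600);
}

void loop() {
  int force = analogRead(fsrPin);
  int accX = analogRead(xPin);
  int accY = analogRead(yPin);
  int accZ = analogRead(zPin);

  // Tab-separated output is easy to scan in the serial monitor
  Serial.print(force);
  Serial.print("\t");
  Serial.print(accX);
  Serial.print("\t");
  Serial.print(accY);
  Serial.print("\t");
  Serial.println(accZ);

  delay(50);  // ~20 readings per second is plenty for eyeballing ranges
}
```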

Separating a jab from a hook, however, proved much more difficult. We did notice slight changes in the x and z axes when switching from a jab to a hook, but the changes were not significant enough to tell the difference reliably. We also had to factor in human behavior via user testing at this point. When the mitts were turned in a very mechanical manner we could find consistencies in the changes along the axes, but we quickly realized that users do not behave so mechanically, and we could not rely on an actual person using the tool in such a robotic way. At this point we tried pairing the accelerometer with a gyroscope for added input that could be used to determine the angle of the mitts, but this also proved unsuccessful. The readings the gyroscope provided were relative rather than absolute: the output depended entirely on the position of the mitts when the reading began. This made it extremely difficult to determine the position and angle needed to categorize hit types, let alone write code that factored in the angle. After many hours of studying data and trying to make distinctions, we decided to leave the tool at separating jabs from uppercuts for this iteration.
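
The classification logic therefore boiled down to a force threshold plus a y-axis check, roughly like the sketch below (the thresholds are placeholders, not our calibrated values):

```cpp
// Rough shape of the hit-classification logic: an uppercut shows a sharp
// change on the accelerometer's y axis compared to a jab. The threshold
// values here are placeholders, not our calibrated numbers.

const int fsrPin = A0;         // FSR at the mitt's sweet spot
const int yPin = A2;           // y axis of an analog accelerometer (illustrative)
const int hitThreshold = 200;  // FSR value that counts as a real hit (placeholder)
const int uppercutY = 400;     // y-axis reading when the mitt faces down (placeholder)

void setup() {
  Serial.begin(9600);
}

void loop() {
  int force = analogRead(fsrPin);

  if (force > hitThreshold) {   // only classify when real contact is made
    int accY = analogRead(yPin);

    if (accY < uppercutY) {
      Serial.println("uppercut");
    } else {
      Serial.println("jab");
    }
    delay(250);                 // crude debounce so one hit isn't counted twice
  }
}
```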

The next (and greatest) challenge of the process was making the system wireless by allowing the mitt to connect to a p5.js sketch in the browser via Bluetooth. To make the system wireless and Bluetooth compatible, we used an Arduino MKR1010. We received a lot of help from Tom Igoe on the JavaScript Bluetooth code, the Arduino Bluetooth code, and the p5.js sketch. We started out using the base code provided by Tom to get each sensor connected over Bluetooth individually. I learned a lot about how Bluetooth works throughout this process, from creating a service for the device, declaring a characteristic for each sensor value, and assigning a UUID to each characteristic, to finally accessing and transforming the raw data values in the p5 sketch to display the appropriate information. Getting the Arduino and sensors to work properly involved a lot of trial and error, as well as logging values at each step of the code to understand what was happening and where information was being gathered and processed.
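
The Arduino side followed the usual ArduinoBLE structure; a trimmed-down sketch of the shape of it, with placeholder UUIDs, a placeholder device name, and dummy sensor values:

```cpp
// Trimmed-down shape of the MKR1010 side: one BLE service for the mitt,
// one characteristic per sensor value, each with its own UUID.
// The UUIDs and local name are placeholders, and the values written in
// loop() are dummies standing in for the real sensor readings.
#include <ArduinoBLE.h>

BLEService mittService("19B10000-E8F2-537E-4F6C-D104768A1214");
BLEIntCharacteristic hitTypeChar("19B10001-E8F2-537E-4F6C-D104768A1214", BLERead | BLENotify);
BLEIntCharacteristic forceChar("19B10002-E8F2-537E-4F6C-D104768A1214", BLERead | BLENotify);

void setup() {
  Serial.begin(9600);
  if (!BLE.begin()) {
    Serial.println("starting BLE failed!");
    while (true);
  }

  BLE.setLocalName("PunchMittLeft");    // the right mitt's sketch differs only in name
  BLE.setAdvertisedService(mittService);
  mittService.addCharacteristic(hitTypeChar);
  mittService.addCharacteristic(forceChar);
  BLE.addService(mittService);
  BLE.advertise();
}

void loop() {
  BLEDevice central = BLE.central();    // wait for the browser to connect

  if (central) {
    while (central.connected()) {
      hitTypeChar.writeValue(1);              // e.g. 1 = jab, 2 = uppercut
      forceChar.writeValue(analogRead(A0));   // raw FSR reading
      delay(100);
    }
  }
}
```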

Once we had a single mitt connected and working, we moved on to mapping the ranges of data and creating increments to give the user the metrics we had determined at the outset. This involved writing a lot of if statements to isolate specific types of hits and positions. After we got one of the mitts working successfully, we had to figure out how to create a second instance of Arduino input in the Bluetooth code. This required giving each device a separate name in the Arduino code and creating two separate objects in the JavaScript code to account for a left mitt versus a right mitt.
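
On the Arduino side, the range-to-increment mapping amounted to a handful of if statements; a rough sketch of the idea, with placeholder breakpoints rather than the values we actually calibrated:

```cpp
// Turning a raw FSR reading into a small number of increments that the
// p5.js sketch can display. The breakpoints are placeholders; the real
// ones came from the ranges we logged during data collection.

const int fsrPin = A0;

int hitQuality(int fsrValue) {
  if (fsrValue > 800) return 3;  // strong, well-placed hit
  if (fsrValue > 400) return 2;  // solid hit
  if (fsrValue > 150) return 1;  // glancing contact
  return 0;                      // no hit
}

void setup() {
  Serial.begin(9600);
}

void loop() {
  Serial.println(hitQuality(analogRead(fsrPin)));
  delay(100);
}
```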

As far as fabrication goes, we decided to place our sensors and Arduino in an actual boxing mitt, for a few reasons. First, in prototyping an actual tool for boxers, we wanted the tool to be as usable and legitimate as possible, and a real mitt accomplished that best. Additionally, because the tool has a very specific use case, we wanted users to be familiar with it and easily understand how they're supposed to interact with it, which is much easier to achieve with an actual mitt. We wired all of our sensors, the Arduino, and the battery to a perf board that we stored in a pocket at the back of the mitt. The FSR was placed under the lining at the front of the glove to make it as inconspicuous as possible.

Final Project Concept

As far as the final project goes, I have a few vague ideas/concepts that I think I’d like to work with. I really enjoyed the parts of our midterm project that involved image capture and manipulation with physical components, and might want to explore those ideas in a very different context. In another class this semester I’ve spent a lot of time thinking about self-perception and self-representation in digital spaces, and I think using image manipulation may be an effective way to explore this space, or allow a user to explore these ideas.

Another idea that I've just started experimenting with in some of my ICM work is generative text. Again, I know this is very vague and high level at this point, but I'm thinking of ideas related to the nuances of physical/body language, and somehow using the subtexts of human interaction and communication to create an actual piece of text or literature. Within text/literature, I've always been specifically interested in humor, poetry, and the human subconscious, and I'm starting to think of ways that I can access or manifest these areas through physical computing.

Midterm - Ghost Mirror Photo Booth

The idea for our "spooky" Halloween photo booth came from combining elements of ideas that Ada and I had come up with independently, prior to our first meeting. Because I love all things related to photography, I knew I wanted to involve some of the p5.js live capture capabilities in our project if possible. A photo booth seemed like a fairly straightforward and achievable task within the scope of the assignment, and I also figured that the bells and whistles of a photo booth could easily be put to work for the Halloween theme. When I met up with Ada, she mentioned that she had been thinking about the concept of a mood ring and how it reacts to body temperature. Through the course of our conversation we realized that the two ideas could be combined, such that a wearable item could be the piece of the project that triggers a photo being taken in the booth.


Considering the Halloween theme, we decided on a hat for a few reasons: hats are generally one-size-fits-all, it would be easy to hide the technical pieces of our project in the tip of a witch's hat, and there is typically only one way/place to wear a hat (fewer confounding variables in terms of interaction design).


Our next consideration, after deciding on the physical parts of the booth, was the type of input/output we needed, and which sensors would work best with the physical pieces while giving us the type and range of readings we wanted. At the outset, it made sense for the photo trigger to come from digital input, so that there would be two states: taking a photo or remaining static. In terms of analog input/output, we thought we could use an analog sensor to randomize some of the spooky features that appear on the captured photo.

We began thinking about some of the conventions associated with photo booths. The hat worked as the prop element that many photo booths have, but we wanted more, and "spookier," features in order to truly embrace the Halloween theme. At first, definitely inspired by Harry Potter, we thought of placing a speaker somewhere in the hat so that the user would hear creepy messages and hopefully react in a way that we could capture in the photo. Eventually, however, we realized that it would be difficult to predict where to put the speaker so that the user could hear the message properly, so we decided instead to print the spooky fortune on the screen after the photo was taken. This way there would be a record of the message, and the fortune would be a surprise for the user.


Somewhere along the line we thought it would be a nice scary touch to also have a ghost appear on the screen and in the picture with the user. While researching the documentation for p5.js's image capabilities, we also discovered image filters and thought that the "invert" filter was most appropriate for a Halloween theme, as it gives everything a sort of skeletal, x-ray appearance. Before attempting to connect our sketch to our sensors, we implemented buttons to take and save the picture while we made decisions about the aesthetics of our photo booth images.

In terms of our sensors, we decided to connect the motion/position of the ghost to an accelerometer placed in the tip of the hat. It was a little harder to figure out what kind of sensor to use to trigger the countdown for taking the picture. At first we considered using a light sensor, where the countdown would be triggered when the input was LOW, meaning the user was wearing the hat. We quickly realized that this method would lead to many situations in which the countdown could be triggered unnecessarily, like when the hat was set down. Eventually, we decided that a capacitance switch would be the most effective way to trigger the countdown, as it could limit input to contact with the rim of the hat. In order to get the analog input to behave in a "binary" manner, we tested the input readings of the capacitance switch through the serial monitor, and then wrote if statements in our code that would trigger the switch once a certain threshold was reached in the input.
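
The thresholding itself is simple; a minimal sketch of the idea, assuming the capacitance reading comes in on an analog pin (the pin and threshold here are placeholders, not our measured values):

```cpp
// Treating the analog capacitance reading as a binary switch: anything
// above the threshold counts as "hat is on". The pin and threshold are
// placeholders; our real threshold came from watching the serial monitor
// with and without the hat on.

const int capPin = A0;
const int threshold = 600;

void setup() {
  Serial.begin(9600);
}

void loop() {
  int reading = analogRead(capPin);
  int hatOn = (reading > threshold) ? 1 : 0;  // binary state for the p5.js sketch
  Serial.println(hatOn);
  delay(50);
}
```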

In terms of preliminary wiring of the physical elements, we worked off of a breadboard, and used a test p5.js sketch to see if all the components were wired correctly and working. We definitely struggled with getting the accelerometer connections secure on the breadboard, which indicated to us early on that we’d have to be pretty meticulous when soldering it for the final product. 

After getting our wired components working with the test sketch, we finally began to connect it to our ghost mirror sketch. It was a lot trickier than it seemed at the outset to replace the functionality of the save and reset buttons with input from the capacitance switch. First of all, we realized that we would need to include an extra state-change variable; otherwise the switch would not reset after it was released from contact. This was especially tricky because we already had a lot of if statements going on, so our process involved using console.log() within each layer of the code to see where the values were changing and which range they fell within. Through our testing, we also realized that a countdown would be necessary, so that the user could have a few moments after putting the hat on to prepare themselves and pose for the photo.
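
Our state-change logic lived in the p5.js sketch, but the pattern is the same one from the Arduino labs: remember the previous state and only trigger on the transition. Sketched here in Arduino form to match the other examples, with a placeholder pin and threshold:

```cpp
// The state-change pattern we needed: trigger once on the transition from
// "not touched" to "touched", then re-arm when the contact is released.
// Our actual version lived in the p5.js sketch; this is the same idea in
// Arduino form, with placeholder pin and threshold.

const int capPin = A0;
const int threshold = 600;
int previousState = 0;  // 0 = not touched, 1 = touched

void setup() {
  Serial.begin(9600);
}

void loop() {
  int currentState = (analogRead(capPin) > threshold) ? 1 : 0;

  if (currentState == 1 && previousState == 0) {
    // rising edge: the hat was just put on, so start the countdown
    Serial.println("triggered");
  }

  previousState = currentState;  // remember the state so the trigger can reset
  delay(50);
}
```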

Once all of the hardware pieces were working appropriately with the p5.js sketch, we were ready for the final steps of assembling the materials of the photo booth and user testing. We spent a lot of time and went through three attempts before we could properly wire the accelerometer. Our first attempt was with an ethernet cable, which we thought would be most convenient since all of the small wires we’d need were contained nicely by the outside tubing. We connected one end of the wires to the accelerometer, and then soldered header pins to the other end to connect to the breadboard. In soldering/connecting, however, a lot of the smaller stranded wires kept breaking off, and we were never able to get the circuit to behave properly.

The next attempt used stranded-core wires, which we twisted neatly using a clamp and a drill from the shop. Here we ran into the exact same connectivity issues as with the ethernet cable, where the small, thin stranded wires kept breaking off and opening the circuit. Finally, we decided to use some of the regular wires from the shop, using the same drill-and-clamp process to twist the wires into one neat strand. These wires were not as flexible as the previous two, but we were finally able to achieve a solid connection, so we used them for the final product.


Getting the capacitance switch to work and maintain connectivity all the way around the rim of the hat was also quite difficult. We gathered various conductive materials, like copper tape, a conductive fabric trimming, regular wires, and tin foil. We tried a lot of different methods, like wiring all around the rim of the hat, placing copper tape coming out from the rim like spokes of a wheel, combining copper tape and the conductive fabric, etc. After trying the hat on our own heads, as well as grabbing a few friends to test it out, it became apparent that it would be hard to guarantee that the capacitance readings would change much from touching someone's hair. Placing the hat lightly on someone's head gave a very different range of readings than pressing a wire with our fingers.

It became apparent that we would need to figure out a way for part of our circuit to make contact with the user's forehead, as touching skin was much more sensitive and reliable than touching hair. Also, the shape, volume, and texture of users' hair varied a lot more than the properties of their skin, so the hat would be functional for a greater range of users if we made contact with skin instead of hair. Therefore, we assembled the hat so that the wire connecting to the Arduino and breadboard would come out of the back, and on the opposite side of the rim we attached a large square of tin foil, wrapped around a wire, that served as the capacitance switch. This method finally worked, and we got accurate readings after testing it out on a few different people.

Serial Communication

I followed along with the labs for this week and was able to successfully connect my Arduino to a p5.js sketch. To begin with, I created a graph to illustrate my potentiometer values as I turned it, as shown in the image below:

[Screenshot: graph of potentiometer values in the p5.js sketch]

Next, I wanted to take a p5.js sketch that I had already created and use serial communication to let a pushbutton or FSR sensor on my breadboard replace the mouse-pressed functionality of the original sketch. In the original piece, every time the user clicks the mouse, a new triangle appears on the canvas. My intention was to first see if I could get a pushbutton to create new triangles instead of the mouse, and then use my FSR sensor to see if I could map its range of values to the size of the triangles, such that different analog input would create larger or smaller triangles.
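
On the Arduino side, this plan only needs the button state and the FSR value sent over serial together; a rough sketch, with illustrative pin choices (the p5.js side would split the line and map() the FSR value to a triangle size):

```cpp
// Arduino side of the serial plan: send the pushbutton state and the FSR
// reading as one comma-separated line so the p5.js sketch can split them
// apart. Pins are illustrative.

const int buttonPin = 2;
const int fsrPin = A0;

void setup() {
  Serial.begin(9600);
  pinMode(buttonPin, INPUT);
}

void loop() {
  int pressed = digitalRead(buttonPin);
  int force = analogRead(fsrPin);

  Serial.print(pressed);
  Serial.print(",");
  Serial.println(force);

  delay(50);
}
```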

Digital Input & Output

As I continue to tinker with creating circuits, the mechanics are beginning to make a lot more sense. I found that incorporating the Arduino helped my understanding because I got a better idea of input and output by writing out the code and thinking about my desired results.

I followed the example of the class lab to begin with, hooking two LEDs up to a switch. I played around a little with variations in LED output, at first keeping one LED always set to HIGH, and later alternating the two LEDs whenever the input was HIGH.
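
The alternating version looked roughly like this sketch (pin numbers are assumptions, not my actual wiring):

```cpp
// Two LEDs on one switch: one LED follows the switch and the other does
// the opposite, so the output alternates when the input goes HIGH.
// Pin numbers are assumptions.

const int switchPin = 2;
const int ledA = 3;
const int ledB = 4;

void setup() {
  pinMode(switchPin, INPUT);
  pinMode(ledA, OUTPUT);
  pinMode(ledB, OUTPUT);
}

void loop() {
  int state = digitalRead(switchPin);
  digitalWrite(ledA, state);                        // follows the switch
  digitalWrite(ledB, state == HIGH ? LOW : HIGH);   // does the opposite
}
```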

Next I incorporated sensors that take in different kinds of input, like a phototransistor light sensor and a force-sensitive resistor. Again, I experimented with various combinations of input and output set to HIGH/LOW.

For my creative application, I used my phototransistor light sensor to make an LED inside my PComp toolbox light up when I opened the box (because the inside of my box is soooooo dark and a 2.4V LED is sooooo bright). The concept is that when the box is closed it is not being used, and it is dark inside. When the box is opened, it is in use and any natural light in the environment will fall on the inside. I wrote the code such that when the digital input is LOW the output is also LOW, and when the digital input is HIGH the digital output is also HIGH.
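
The code is only a few lines; a sketch of it, with assumed pin numbers:

```cpp
// Toolbox light: mirror the light sensor's state onto the LED, so the LED
// is on whenever light reaches the sensor (i.e. the box is open).
// Pin numbers are assumptions.

const int sensorPin = 2;  // phototransistor circuit read as a digital input
const int ledPin = 3;

void setup() {
  pinMode(sensorPin, INPUT);
  pinMode(ledPin, OUTPUT);
}

void loop() {
  int boxOpen = digitalRead(sensorPin);  // HIGH when the box is open
  digitalWrite(ledPin, boxOpen);         // LED mirrors the input
}
```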

And it worked! This application fits my overall idea, but ideally, or in a different environment, a proximity sensor might work better conceptually. I was working during the day, near a window, so when I open the box there's a good bit of natural light for the light sensor to pick up on and illuminate the LED. But, considering that lighting up the box is probably even more useful in a completely dark environment, this might not be the most practical application: you probably don't need an LED if you have enough natural light for the sensor to pick up on. In the future, I could use a proximity sensor instead, such that when my hands go near or into the box, the LED lights up so that I can see what I'm reaching for even in a pitch-black environment.

Observation

The interactive technology I chose to observe in the ~wild~ this week was subway doors. I am infinitely fascinated by anything and everything related to the subway. One of the best things about riding the subway is paying attention to how people interact with the imminent closing of the doors. Comic relief aside, observing the subway doors from the perspective of interaction design and human behavior reveals a LOT about what people expect from technology, and how they figure out how to get what they want when the tech falls short of those expectations.

One of my most notable observations was the wide variety of ways in which subway riders react to the closing doors. Some will notice or predict the doors beginning to close (either by seeing people exit or hearing an announcement) and slow their pace even if they are within reachable distance of the subway car. Others, however, will bolt all the way from the stairs leading to the platform in an attempt to get onto the train. Most of these people fail, but I noticed that they like to get creative. A good number of people took advantage of items they were holding, like backpacks, umbrellas, and babies (just kidding!), and launched them forward beyond their own bodies to hold the doors open. This behavior indicates that these people are not only aware that the subway will not begin moving if the doors are even the slightest bit open, but also that they have figured out how to take advantage of it. There was also one individual who arrived maybe half a second too late to rush inside the car, and then proceeded to kick the door very hard. Voila! It opened! I should try it sometime. Nevertheless, this made me wonder if this was his first time kicking the door, perhaps out of frustration, or if it was a tried and true technique that he had learned on his own or by observing others as I was observing him.

Electronics Labs

My process for understanding how circuits work this week began with studying the schematic drawings before attempting to build them with a breadboard and various other components. I think at this point I'm struggling most with visualizing how the structure of the breadboard maps onto the schematic circuit drawings.

I first started out with getting a single LED to light up, which wasn't too hard. Mentally, I followed the process of tracing the wire from the 5V source, using another wire to connect it to the horizontal strips on the breadboard, connecting that wire to the anode (+) side of the LED, connecting the cathode (-) side of the LED to a 220-ohm resistor, and then connecting the resistor to a wire leading back to ground.

I played around with a few different arrangements of this basic circuit to familiarize myself with the breadboard structure. I was quite intimidated after seeing the demos in class, but once the information started to click it was a lot of fun to play around and try different ways of wiring, adding LEDs, and adding switches. (I felt like Thomas Edison :P )

The next challenge I posed for myself was wiring multiple LEDs in parallel with one another. Thankfully, before I attempted it, I got a helpful note from a classmate to make sure I had a resistor wired in series with each LED. No explosions!! To make sure I had actually wired the LEDs in parallel properly, I pulled one out to see if the other would remain lit. It did!!

As simple as it seems, one of the most helpful things I found was turning the breadboard so that the connections faced the same direction as the schematic I was trying to create. I'm a visual/spatial thinker, so this helped fill in the holes of my mental diagram.

For my creative application of a switch, I wanted to keep it simple conceptually and just thought about it from the perspective of one object coming into contact with another. I knew the objects would have to be conductive, so I decided to use the two gold bangles on my hand, since gold conducts well.

I thought it would make an interesting "switch" of sorts because, when I am moving my hand, the bangles constantly shift from coming into contact with one another to separating again. It seemed like a weird study of the motion of my hand, as the on/off flickering of the light could serve as an indication of whether my hand was moving: if the light is flickering, my hand is probably moving, whereas if it stays either on or off, my hand is probably still. (Also, it was raining really hard outside, so I didn't want to leave the building to find materials, and these are two metallic objects that I generally always have on my person. I wonder what other "innovations" were born of laziness or limitation??)

Voila! It worked!! I may have had a little too much fun jingling my bangles around and watching the light flicker.