Arm as a Touchscreen


Published on May 4, 2012

Author: abhi230789

Source: slideshare.net

ARM AS A TOUCHSCREEN

ABHIJEET S. KAPSE

Table of Contents

Abstract
1. Introduction
2. Skinput
   2.1 What is Skinput
   2.2 Principle of Skinput
3. Working of Skinput
   3.1 Pico-projector
   3.2 Bioacoustics
      3.2.1 Transverse Wave Propagation
      3.2.2 Longitudinal Wave Propagation
      3.2.3 Bioacoustic Sensor
   3.3 Bluetooth
4. Experiments
   4.1 Experimental Conditions
   4.2 Analysis
   4.3 BMI Effect
5. Advantages
6. Disadvantages
7. Applications
8. Future Implementation
Conclusion
References

ABSTRACT

The popularity of mobile devices is increasing day by day thanks to their portability, mobility, and flexibility, but their limited size leaves very little interactive surface area. We cannot simply make a device larger without losing the benefit of its small size. Microsoft Research has therefore developed Skinput, a technology that appropriates the human body for acoustic transmission, allowing the skin to be used as an input surface. The human body produces different vibrations when we tap on different body parts. Using this unique property, different locations on the body can trigger different functions of small devices such as mobile phones or music players. When we tap on the body, mechanical vibrations propagate through it; these vibrations are captured by a sensor array worn as an armband, which transmits the resulting signals to the mobile device, where software determines the location at which the finger tapped. The desired operation is then performed according to that location. When augmented with a pico-projector, the device can provide a direct-manipulation graphical user interface on the body. This approach yields an always-available, naturally portable, on-body finger input system.

CHAPTER 1
INTRODUCTION

The world has embraced an invention known as the mobile phone. Mobile devices became popular in a short time thanks to advantages such as portability, flexibility, mobility, and responsiveness. These devices fit easily in our pockets, so we do not need to carry any extra surface area with us. Devices with significant computational power and capabilities can now be easily carried on our bodies. However, their small size typically leads to limited interaction space (e.g., diminutive screens, buttons, and jog wheels) and consequently diminishes their usability and functionality, since we cannot simply make buttons and screens larger without losing the primary benefit of small size.

There are alternative approaches that enhance interaction with small mobile systems. One option is to opportunistically appropriate surface area from the environment for interactive purposes, for example a technique that allows a small mobile device to turn the table on which it rests into a gestural finger-input canvas. However, tables are not always present, so this technique cannot be used everywhere, and in a mobile context users are unlikely to want to carry appropriated surfaces with them (at that point, one might as well just have a larger device). There is, however, one surface that has previously been overlooked as an input canvas and that happens to always travel with us: our skin.

Appropriating the human body as an input device is appealing not only because we have roughly two square meters of external surface area, but also because much of it is easily accessible by our hands (e.g., arms, upper legs, torso), and it can be used without any visual contact. Furthermore, proprioception (our sense of how our body is configured in three-dimensional space) allows us to accurately interact with our bodies in an eyes-free manner. For example, we can readily flick each of our fingers, touch the tip of our nose, and clap our hands together without visual assistance. Few external input devices can claim this accurate, eyes-free input characteristic while providing such a large interaction area. Any part of the body could serve as an input surface, but for comfortable operation the arm is the natural choice.

In this paper, we present our work on Skinput, a method that allows the body to be appropriated for finger input using a novel, non-invasive, wearable bio-acoustic sensor.

The technology was developed by Chris Harrison, Desney Tan, and Dan Morris at Microsoft Research's Computational User Experiences Group. Skinput is a combination of three technologies: a pico-projector, bio-acoustic sensors, and Bluetooth. The pico-projector displays the mobile screen on the skin. The user taps on the body as needed; the tap produces vibrations through the body, and those ripples are captured by bio-acoustic sensors mounted on an armband. The armband is connected to the mobile device by a wireless connection, i.e., Bluetooth. The mobile device runs software that matches the vibration signal against stored signals, and the desired operation is performed. A Support Vector Machine, a supervised learning algorithm, is used to train the software: at an initial stage, signal data from each location on the arm is stored as the reference signals for the software. Skinput employs acoustics, taking advantage of the human body's natural sound-conductive properties (e.g., bone conduction). This allows the body to be annexed as an input surface without the need for the skin to be invasively instrumented with sensors, tracking markers, or other items.

The contributions of this paper are: the description of the design of a novel, wearable sensor for bio-acoustic signal acquisition, and the description of an analysis approach that enables the Skinput system to resolve the location of finger taps on the body. When coupled with a pico-projector, the skin can operate as an interactive canvas supporting both input and graphical output.

CHAPTER 2
Skinput

2.1 What Is Skinput

Touch screens have revolutionized the way we communicate with electronics, but sometimes they can feel a little cramped: wouldn't it be great if the iPhone's screen were just a little bit bigger? One creative solution is Skinput, a system that uses a pico-projector to beam graphics (keyboards, menus, etc.) onto a user's palm and forearm, transforming the skin into a computer interface. "Skinput" is a combination of two words, skin and input. The technology uses the body's largest organ, the skin, as an input surface for mobile gadgets. Chris Harrison and a team at Microsoft Research developed Skinput, a way in which your skin can become a touch-screen device, or your fingers the buttons of an MP3 controller.

Figure 1: Display on the palm using Skinput technology

Skinput represents one way to decouple input from electronic devices, with the aim of allowing devices to become smaller without simultaneously shrinking the surface area on which input can be performed.

2.2 Principle Of Skinput

Due to the unique structure of the arm, with its varying bone thickness, muscle and fat tissue concentrations, and the like, a tap at each different place along the arm delivers a unique combination of transverse and longitudinal waves up the arm, toward the torso. Transverse waves are ripples of loose skin expanding away from the point of impact. Longitudinal waves are vibrations emitted by the (recently struck) bone along its entire length, from the center of the arm toward the skin.

Skinput relies on an armband, currently worn around the biceps. It detects vibrations in the arm and compares them with predefined control commands (e.g., up, down, back, enter). Additionally, thanks to the sense of proprioception (the ability to sense the position of our body parts without looking), Skinput does not occupy the user's vision, much like touch typing.

The current Skinput prototype relies on an array of small, cantilevered piezo films (MiniSense 100, Measurement Specialties, Inc.). This setup was found favourable for measuring the specific wave frequencies while providing a satisfactory signal-to-noise ratio. The sensors output acoustic wave signals, which are then processed, segmented, and classified by the software in order to execute a predefined command.
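The tap-to-command flow just described can be summarized in a short sketch. Everything below (read_armband, segment_tap, classify, the threshold values) is a hypothetical stand-in used only to illustrate the pipeline, not the authors' implementation:

```python
import random

# Toy end-to-end sketch of the tap -> segment -> classify -> command loop.
# Every function here is a made-up placeholder for the real hardware and
# trained classifier.

LOCATIONS = ["arm", "wrist", "palm", "thumb", "middle finger"]

def read_armband():
    """Pretend to read one window of acoustic samples from the sensor array."""
    return [random.gauss(0.0, 0.3) for _ in range(256)]

def segment_tap(window, threshold=0.8):
    """Return the window if its peak looks like a tap, else None."""
    return window if max(abs(s) for s in window) > threshold else None

def classify(window):
    """Stand-in for the trained SVM: map a segmented tap to a location."""
    return LOCATIONS[int(1000 * sum(abs(s) for s in window)) % len(LOCATIONS)]

def fire_command(location):
    print(f"Tap at {location}: run the operation bound to that location")

# One pass of the loop; a real system would run this continuously.
window = read_armband()
tap = segment_tap(window)
if tap is not None:
    fire_command(classify(tap))
```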

CHAPTER 3
Working Of Skinput

3.1 Pico-Projector

Pico-projectors are tiny battery-powered projectors, as small as a mobile phone or even smaller: they can even be embedded inside phones or digital cameras. Pico-projectors are small, but they can show large displays (sometimes up to 100"). While great for mobility and content sharing, pico-projectors offer low brightness and resolution compared to larger projectors. Although a recent innovation, pico-projectors were already selling at a rate of about a million units a year in 2010, and the market is expected to continue growing quickly.

Figure 2: Pico-projector

Skinput uses DLP (Digital Light Processing). Pioneered by TI, the idea behind DLP is to use tiny mirrors on a chip that direct the light. Each mirror controls the amount of light its designated pixel in the target picture receives: the mirror has only two states, on and off, and it switches many times per second, so if it is on 50% of the time, the pixel appears at 50% brightness. Color is achieved by using a color wheel between the light source and the mirrors; this splits the light into red, green, and blue, and each mirror controls all three light beams for its designated pixel. With the help of this tiny projector, the required menu is displayed on the arm.
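As a rough illustration of the duty-cycle arithmetic above (a toy model, not TI's actual pulse-width modulation scheme; the slot counts are made up):

```python
# Toy model of DLP pixel brightness: each micromirror is strictly on or off,
# so perceived brightness is the fraction of refresh slots it spends "on".

def apparent_brightness(on_slots: int, total_slots: int) -> float:
    """Duty cycle of one mirror = perceived brightness of its pixel (0..1)."""
    return on_slots / total_slots

# A mirror that is on in 50% of its refresh slots yields a pixel at 50%
# brightness, matching the example in the text.
print(apparent_brightness(on_slots=120, total_slots=240))  # -> 0.5
print(apparent_brightness(on_slots=60, total_slots=240))   # -> 0.25
```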

3.2 Bio-Acoustics

Acoustics is the interdisciplinary science that deals with the study of all mechanical waves in gases, liquids, and solids, including vibration, sound, ultrasound, and infrasound. A scientist who works in the field of acoustics is an acoustician, while someone working in acoustics technology may be called an acoustical engineer. The applications of acoustics can be seen in almost all aspects of modern society, most obviously in the audio and noise-control industries. Bioacoustics is a cross-disciplinary science that combines biology and acoustics. Usually it refers to the investigation of sound production, dispersion through elastic media, and reception in animals, including humans.

When a finger taps the skin, several distinct forms of acoustic energy are produced. Some energy is radiated into the air as sound waves; this energy is not captured by the Skinput system. Among the acoustic energy transmitted through the arm, the most readily visible are transverse waves, created by the displacement of the skin from a finger impact. When filmed with a high-speed camera, these appear as ripples that propagate outward from the point of contact. The amplitude of these ripples is correlated with both the tapping force and the volume and compliance of the soft tissue under the impact area. In general, tapping on soft regions of the arm creates higher-amplitude transverse waves than tapping on bony areas (e.g., wrist, palm, fingers), which have negligible compliance.

In addition to the energy that propagates on the surface of the arm, some energy is transmitted inward, toward the skeleton. These longitudinal (compressive) waves travel through the soft tissues of the arm, exciting the bone, which is much less deformable than the soft tissue but can respond to mechanical excitation by rotating and translating as a rigid body. This excitation vibrates the soft tissues surrounding the entire length of the bone, resulting in new longitudinal waves that propagate outward to the skin.

3.2.1 Transverse Wave Propagation

Figure 3: Finger impacts displace the skin, creating transverse waves (ripples). The sensor is activated as the wave passes underneath it.

3.2.2 Longitudinal Wave Propagation

Figure 4: Finger impacts create longitudinal (compressive) waves that cause internal skeletal structures to vibrate. This, in turn, creates longitudinal waves that emanate outward from the bone (along its entire length) toward the skin.

We highlight these two separate forms of conduction (transverse waves moving directly along the arm surface, and longitudinal waves moving into and out of the bone through soft tissues) because these mechanisms carry energy at different frequencies and over different distances. Roughly speaking, higher frequencies propagate more readily through bone than through soft tissue, and bone conduction carries energy over larger distances than soft-tissue conduction. While we do not explicitly model the specific mechanisms of conduction, or depend on these mechanisms for our analysis, we do believe the success of the technique depends on the complex acoustic patterns that result from mixtures of these modalities. Similarly, we believe joints play an important role in making tapped locations acoustically distinct. Bones are held together by ligaments, and joints often include additional biological structures such as fluid cavities. This makes joints behave as acoustic filters: in some cases they may simply dampen acoustics, while in other cases they selectively attenuate specific frequencies, creating location-specific acoustic signatures.

Figure 5: Armband containing the vibration sensor array

3.2.3 Bioacoustic Sensor

The MiniSense 100 is a low-cost cantilever-type vibration sensor loaded by a mass to offer high sensitivity at low frequencies. The pins are designed for easy installation and are solderable. Horizontal and vertical mounting options are offered, as well as a reduced-height version. The active sensor area is shielded for improved RFI/EMI rejection. The rugged, flexible PVDF sensing element withstands high shock overload. The sensor has excellent linearity and dynamic range, and may be used for detecting either continuous vibration or impacts. Some features of the MiniSense 100 are given below (a sample conversion using the quoted sensitivity follows the list):

• High voltage sensitivity (1 V/g)
• Over 5 V/g at resonance
• Horizontal or vertical mounting
• Shielded construction
• Solderable pins, PCB mounting
• Low cost
• < 1% linearity
• Up to 40 Hz (2,400 rpm) operation below resonance
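With the quoted 1 V/g sensitivity, converting a raw output voltage to acceleration is a one-line calculation; the sample voltages below are invented readings, not datasheet values:

```python
# Converting a MiniSense 100 output voltage to acceleration using the
# nominal 1 V/g sensitivity quoted above (valid in the linear region below
# resonance, up to ~40 Hz).

SENSITIVITY_V_PER_G = 1.0  # baseline voltage sensitivity from the list above

def voltage_to_g(volts: float, sensitivity: float = SENSITIVITY_V_PER_G) -> float:
    """Acceleration in g for a given sensor output voltage."""
    return volts / sensitivity

for v in (0.05, 0.2, 1.2):  # hypothetical outputs for a soft, medium, hard tap
    print(f"{v:.2f} V -> {voltage_to_g(v):.2f} g")
```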

3.3 Bluetooth

Bluetooth is a wireless technology standard for exchanging data over short distances (using short-wavelength radio transmissions in the ISM band from 2400 to 2480 MHz) between fixed and mobile devices, creating personal area networks (PANs) with high levels of security. Created by the telecoms vendor Ericsson in 1994, it was originally conceived as a wireless alternative to RS-232 data cables. It can connect several devices, overcoming problems of synchronization. Bluetooth takes small-area networking to the next level by removing the need for user intervention and keeping transmission power extremely low to save battery power.

Bluetooth is essentially a networking standard that works at two levels:

• It provides agreement at the physical level: Bluetooth is a radio-frequency standard.
• It provides agreement at the protocol level, where products have to agree on when bits are sent, how many will be sent at a time, and how the parties in a conversation can be sure that the message received is the same as the message sent.

Bluetooth is intended to get around the problems that come with infrared systems. The older Bluetooth 1.0 standard has a maximum transfer speed of 1 megabit per second (Mbps), while Bluetooth 2.0 can manage up to 3 Mbps. Bluetooth 2.0 is backward-compatible with 1.0 devices.

One of the ways Bluetooth devices avoid interfering with other systems is by sending out very weak signals of about 1 milliwatt. By comparison, the most powerful cell phones can transmit a signal of 3 watts. The low power limits the range of a Bluetooth device to about 10 meters (32 feet), cutting the chances of interference between your computer system and your portable telephone or television. Even with the low power, Bluetooth doesn't require line of sight between communicating devices: the walls in your house won't stop a Bluetooth signal, making the standard useful for controlling several devices in different rooms.

Bluetooth can connect up to eight devices simultaneously. With all of those devices in the same 10-meter (32-foot) radius, you might think they'd interfere with one another, but it's unlikely. Bluetooth uses a technique called spread-spectrum frequency hopping that makes it rare for more than one device to be transmitting on the same frequency at the same time. In this technique, a device uses 79 individual, randomly chosen frequencies within a designated range, changing from one to another on a regular basis. In the case of Bluetooth, the transmitters change frequencies 1,600 times every second, meaning that more devices can make full use of a limited slice of the radio spectrum. Since every Bluetooth transmitter uses spread-spectrum transmitting automatically, it is unlikely that two transmitters will be on the same frequency at the same time. This same technique minimizes the risk that portable phones or baby monitors will disrupt Bluetooth devices, since any interference on a particular frequency will last only a tiny fraction of a second.

So the armband and the mobile device are connected using Bluetooth. Whatever data is received by the sensors is transferred to the mobile device, which samples the data, compares it with the stored data, and performs the task selected by the algorithm.
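The hopping behaviour described above can be illustrated with a small sketch. Note that real Bluetooth derives its hop sequence deterministically from the master device's clock and address; the random selection here is only a simplification:

```python
import random

# Illustration of spread-spectrum frequency hopping: 79 channels spaced
# 1 MHz apart in the 2.4 GHz ISM band, changed 1,600 times per second.

CHANNELS_MHZ = [2402 + k for k in range(79)]  # 2402 ... 2480 MHz
HOPS_PER_SECOND = 1600

def hop_sequence(n_hops):
    """Return n_hops pseudorandomly chosen carrier frequencies in MHz."""
    return [random.choice(CHANNELS_MHZ) for _ in range(n_hops)]

print(hop_sequence(5))            # e.g. [2417, 2461, 2403, 2444, 2429]
print(1 / HOPS_PER_SECOND, "s")   # dwell time per hop: 0.000625 s
```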

CHAPTER 4
Experiments

4.1 Experimental Conditions

To evaluate the performance of the system, we recruited 13 participants (7 female) from the Greater Seattle area. These participants represented a diverse cross-section of potential ages and body types. Ages ranged from 20 to 56 (mean 38.3), and computed body mass indexes (BMIs) ranged from 20.5 (normal) to 31.9 (obese).

We selected three input groupings from the multitude of possible location combinations to test. We believe that these groupings are of particular interest with respect to interface design and, at the same time, push the limits of our sensing capability. From these three groupings, we derived five different experimental conditions, described below.

Fingers (Five Locations). One set of gestures we tested had participants tapping on the tips of each of their five fingers. The fingers offer interesting affordances that make them compelling to appropriate for input. Foremost, they provide clearly discrete interaction points, which are even already well named (e.g., ring finger). In addition to the five fingertips, there are 14 knuckles (five major, nine minor), which, taken together, could offer 19 readily identifiable input locations on the fingers alone. Second, we have exceptional finger-to-finger dexterity, as demonstrated when we count by tapping on our fingers. Finally, the fingers are linearly ordered, which is potentially useful for interfaces like number entry, magnitude control (e.g., volume), and menu selection. At the same time, fingers are among the most uniform appendages on the body, with all but the thumb sharing a similar skeletal and muscular structure.

This drastically reduces acoustic variation and makes differentiating among them difficult. Additionally, acoustic information must cross as many as five (finger and wrist) joints to reach the forearm, which further dampens signals. For this experimental condition, we therefore decided to place the sensor arrays on the forearm, just below the elbow. Despite these difficulties, pilot experiments showed measurable acoustic differences among fingers, which we theorize is primarily related to finger length and thickness, interactions with the complex structure of the wrist bones, and variations in the acoustic transmission properties of the muscles extending from the fingers to the forearm.

Whole Arm (Five Locations). Another gesture set investigated the use of five input locations on the forearm and hand: arm, wrist, palm, thumb, and middle finger. We selected these locations for two important reasons. First, they are distinct and named parts of the body (e.g., "wrist"). This allowed participants to accurately tap these locations without training or markings. Additionally, these locations proved to be acoustically distinct during piloting, with the large spatial spread of input points offering further variation. We used these locations in three different conditions. One condition placed the sensor above the elbow, while another placed it below. This was incorporated into the experiment to measure the accuracy loss across this significant articulation point (the elbow). Additionally, participants repeated the lower-placement condition in an eyes-free context: participants were told to close their eyes and face forward, both for training and testing. This condition was included to gauge how well users could target on-body input locations in an eyes-free context (e.g., while driving).

Forearm (Ten Locations). In an effort to assess the upper bound of the approach's sensing resolution, the fifth and final experimental condition used ten locations on just the forearm. Not only was this a very high density of input locations (unlike the whole-arm condition), but it also relied on an input surface (the forearm) with a high degree of physical uniformity (unlike, e.g., the hand). We expected that these factors would make acoustic sensing difficult. Moreover, this location was compelling due to its large and flat surface area, as well as its immediate accessibility, both visually and for finger input. Simultaneously, this makes it an ideal projection surface for dynamic interfaces. To maximize the surface area for input, we placed the sensor above the elbow, leaving the entire forearm free. Rather than naming the input locations, as was done in the previously described conditions, we employed small, colored stickers to mark input targets. This was both to reduce confusion (since locations on the forearm do not have common names) and to increase input consistency.

As mentioned previously, we believe the forearm is ideal for projected interface elements; the stickers served as low-tech placeholders for projected interface elements.

Design and Setup

We employed a within-subjects design, with each participant performing tasks in each of the five conditions in randomized order: five fingers with sensors below the elbow; five points on the whole arm with the sensors above the elbow; the same points with sensors below the elbow, both sighted and blind; and ten marked points on the forearm with the sensors above the elbow. Participants were seated in a conventional office chair, in front of a desktop computer that presented the stimuli. For conditions with sensors below the elbow, we placed the armband ~3 cm away from the elbow, with one sensor package near the radius and the other near the ulna. For conditions with the sensors above the elbow, we placed the armband ~7 cm above the elbow, such that one sensor package rested on the biceps. Right-handed participants had the armband placed on the left arm, which allowed them to use their dominant hand for finger input. For the one left-handed participant, we flipped the setup, which had no apparent effect on the operation of the system. The tightness of the armband was adjusted to be firm but comfortable. While performing tasks, participants could place their elbow on the desk, tucked against their body, or on the chair's adjustable armrest; most chose the latter.

Procedure

For each condition, the experimenter walked through the input locations to be tested and demonstrated finger taps on each. Participants practiced duplicating these motions for approximately one minute with each gesture set. This allowed participants to familiarize themselves with our naming conventions (e.g., "pinky", "wrist") and to practice tapping their arm and hands with a finger of the opposite hand. It also allowed us to convey the appropriate tap force to participants, who often initially tapped unnecessarily hard. To train the system, participants were instructed to comfortably tap each location ten times, with a finger of their choosing. This constituted one training round. In total, three rounds of training data were collected per input location set (30 examples per location, 150 data points total). An exception to this procedure was the case of the ten forearm locations, where only two rounds were collected to save time (20 examples per location, 200 data points total).

Total training time for each experimental condition was approximately three minutes. We used the training data to build an SVM classifier. During the subsequent testing phase, we presented participants with simple text stimuli (e.g., "tap your wrist"), which instructed them where to tap. The order of stimuli was randomized, with each location appearing ten times in total. The system performed real-time segmentation and classification, and provided immediate feedback to the participant (e.g., "you tapped your wrist"). We provided feedback so that participants could see where the system was making errors (as they would if using a real application).

Figure 6: Accuracy of the three whole-arm-centric conditions. Error bars represent standard deviation.

If an input was not segmented (i.e., the tap was too quiet), participants could see this and would simply tap again. Overall, segmentation error rates were negligible in all conditions and are not included in further analysis. In this section, we report the classification accuracies for the test phases in the five different conditions. Overall, classification rates were high, with an average accuracy across conditions of 87.6%. Additionally, we present preliminary results exploring the correlation between classification accuracy and factors such as BMI, age, and sex.

Five Fingers

Despite multiple joint crossings and ~40 cm of separation between the input targets and the sensors, classification accuracy remained high for the five-finger condition, averaging 87.7% (SD=10.0%, chance=20%) across participants. Segmentation, as in the other conditions, was essentially perfect. Inspection of the confusion matrices showed no systematic errors in the classification, with errors tending to be evenly distributed over the other digits. When classification was incorrect, the system believed the input to be an adjacent finger 60.5% of the time, only marginally above the prior probability (40%). This suggests there are only limited acoustic continuities between the fingers. The only potential exception was the case of the pinky, where the ring finger constituted 63.3% of the misclassifications.

Whole Arm

Participants performed three conditions with the whole-arm location configuration. The below-elbow placement performed best, posting a 95.5% (SD=5.1%, chance=20%) average accuracy. This is not surprising, as this condition placed the sensors closer to the input targets than the other conditions. Moving the sensor above the elbow reduced accuracy to 88.3% (SD=7.8%, chance=20%), a drop of 7.2%. This is almost certainly related to the acoustic loss at the elbow joint and the additional 10 cm of distance between the sensor and the input targets. Figure 6 shows these results. The eyes-free input condition yielded lower accuracies than the other conditions, averaging 85.0% (SD=9.4%, chance=20%). This represents a 10.5% drop from its vision-assisted, but otherwise identical, counterpart condition. It was apparent from watching participants complete this condition that targeting precision was reduced. In sighted conditions, participants appeared able to tap locations with perhaps a 2 cm radius of error; although not formally captured, this margin of error appeared to double or triple when the eyes were closed. We believe that additional training data, which better covers the increased input variability, would remove much of this deficit. We would also caution designers developing eyes-free, on-body interfaces to carefully consider the locations participants can tap accurately.

Figure 7: Higher accuracies can be achieved by collapsing the ten input locations into groups. A–E and G were designed to be spatially intuitive. F was created following analysis of per-location accuracy data.

Forearm

Classification accuracy for the ten-location forearm condition stood at 81.5% (SD=10.5%, chance=10%), a surprisingly strong result for an input set we devised to push the system's sensing limit (kappa=0.72, considered very strong). Following the experiment, we considered different ways to improve accuracy by collapsing the ten locations into larger input groupings. The goal of this exercise was to explore the tradeoff between classification accuracy and the number of input locations on the forearm, which represents a particularly valuable input surface for application designers. We grouped targets into sets based on what we believed to be logical spatial groupings. In addition to exploring classification accuracies for layouts that we considered intuitive, we also performed an exhaustive search over all possible groupings. For most location counts, this search confirmed that our intuitive groupings were optimal; however, it revealed one plausible, although irregular, layout with high accuracy at six input locations (Figure 7, F).

Unlike in the five-fingers condition, there appeared to be shared acoustic traits that led to a higher likelihood of confusion with adjacent targets than with distant ones. This effect was more prominent laterally than longitudinally. Figure 7 illustrates this, with lateral groupings consistently outperforming similarly arranged longitudinal groupings (B and C vs. D and E). This is unsurprising given the morphology of the arm, with its high degree of bilateral symmetry along the long axis.

4.2 Analysis

The audio stream was segmented into individual taps using an absolute exponential average of all sensor channels (Figure 8, red waveform). When an intensity threshold was exceeded (upper blue line), the program recorded the timestamp as a potential start of a tap. If the intensity did not fall below a second, independent "closing" threshold (lower purple line) between 100 and 700 ms after the onset crossing (a duration we found to be common for finger impacts), the event was discarded. If start and end crossings were detected that satisfied these criteria, the acoustic data in that period (plus a 60 ms buffer on either end) was considered an input event (vertical green regions). Although simple, this heuristic proved robust.
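A minimal sketch of this segmentation heuristic follows. The sample rate, smoothing constant, and the two threshold values are assumptions chosen for illustration, not the authors' exact parameters:

```python
# Sketch of tap segmentation: an absolute exponential average over all
# channels is compared against an onset threshold and a lower "closing"
# threshold; only crossings 100-700 ms apart count as taps.

SAMPLE_RATE = 5500                  # samples per second (assumption)
ONSET, CLOSING = 0.30, 0.10         # intensity thresholds (assumptions)
MIN_LEN = int(0.100 * SAMPLE_RATE)  # 100 ms minimum tap duration
MAX_LEN = int(0.700 * SAMPLE_RATE)  # 700 ms maximum tap duration
PAD = int(0.060 * SAMPLE_RATE)      # 60 ms buffer on either end

def envelope(channels, alpha=0.05):
    """Absolute exponential average across channels, one value per sample."""
    avg, out = 0.0, []
    for frame in zip(*channels):
        avg = alpha * sum(abs(x) for x in frame) / len(frame) + (1 - alpha) * avg
        out.append(avg)
    return out

def segment(channels):
    """Yield (start, end) sample windows that qualify as finger taps."""
    start = None
    for i, level in enumerate(envelope(channels)):
        if start is None and level > ONSET:
            start = i                              # potential tap onset
        elif start is not None and level < CLOSING:
            if MIN_LEN <= i - start <= MAX_LEN:    # plausible tap duration
                yield max(0, start - PAD), i + PAD
            start = None                           # otherwise: discard event

# Synthetic demo: a quiet signal with one loud burst in the middle.
sig = [[0.0] * 2000 + [1.0] * 1500 + [0.0] * 2000]   # one channel
print(list(segment(sig)))
```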

Figure 8: Ten channels of acoustic data generated by three finger taps on the forearm, followed by three taps on the wrist. The exponential average of the channels is shown in red. Segmented input windows are highlighted in green. Note how different sensing elements are activated by the two locations.

After an input has been segmented, the waveforms are analyzed. We employ a brute-force machine learning approach, computing 186 features in total, many of which are derived combinatorially. For gross information, we include the average amplitude, standard deviation, and total (absolute) energy of the waveforms in each channel (30 features). From these, we calculate all average amplitude ratios between channel pairs (45 features) and include an average of these ratios (1 feature). We calculate a 256-point FFT for all 10 channels, although only the lower 10 values are used (representing the acoustic power from 0 to 193 Hz), yielding 100 features. These are normalized by the highest-amplitude FFT value found on any channel. We also include the center of mass of the power spectrum within the same 0–193 Hz range for each channel, a rough estimation of the fundamental frequency of the signal displacing each sensor (10 features). Subsequent feature selection established the all-pairs amplitude ratios and certain bands of the FFT to be the most predictive features.

These 186 features are passed to a support vector machine (SVM) classifier. A full description of SVMs is beyond the scope of this paper (see Burges [5] for a tutorial). Our software uses the implementation provided in the Weka machine learning toolkit. It should be noted, however, that other, more sophisticated classification techniques and features could be employed; the results presented in this paper should therefore be considered a baseline. Before the SVM can classify input instances, it must first be trained to the user and the sensor position. This stage requires the collection of several examples for each input location of interest. When using Skinput to recognize live input, the same 186 acoustic features are computed on the fly for each segmented input and fed into the trained SVM for classification. We use an event model in our software: once an input is classified, an event associated with that location is instantiated, and any interactive features bound to that event are fired.
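The feature computation can be sketched as follows; shapes, FFT bin choices, and feature ordering are illustrative assumptions rather than the authors' exact layout:

```python
import itertools
import numpy as np

# Sketch of the 186-feature vector described above for one segmented tap
# across 10 channels.

def tap_features(window):
    """window: (10, n_samples) array holding one segmented tap."""
    avg    = np.abs(window).mean(axis=1)          # 10 average amplitudes
    std    = window.std(axis=1)                   # 10 standard deviations
    energy = np.abs(window).sum(axis=1)           # 10 total absolute energies

    pairs  = itertools.combinations(range(10), 2)
    ratios = np.array([avg[i] / avg[j] for i, j in pairs])  # 45 pair ratios

    spectra = np.abs(np.fft.rfft(window, n=256, axis=1))[:, :10]  # 10 x 10 low bins
    spectra = spectra / spectra.max()             # normalize by the largest bin anywhere
    bins = np.arange(10)
    centroid = (spectra * bins).sum(axis=1) / spectra.sum(axis=1)  # 10 spectral centers of mass

    return np.concatenate([avg, std, energy, ratios, [ratios.mean()],
                           spectra.ravel(), centroid])    # 186 features total

feats = tap_features(np.random.randn(10, 256))    # fake 10-channel tap window
print(feats.shape)                                # -> (186,)
```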

SVM

A support vector machine (SVM) is a concept in statistics and computer science for a set of related supervised learning methods that analyze data and recognize patterns, used for classification and regression analysis. The standard SVM takes a set of input data and predicts, for each given input, which of two possible classes the input belongs to, making the SVM a non-probabilistic binary linear classifier. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that assigns new examples to one category or the other. An SVM model is a representation of the examples as points in space, mapped so that the examples of the separate categories are divided by a clear gap that is as wide as possible. New examples are then mapped into that same space and predicted to belong to a category based on which side of the gap they fall.
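A minimal sketch of the train-then-classify flow, using scikit-learn's SVC as a Python stand-in for the Weka SVM used by the authors, with entirely fabricated data:

```python
import numpy as np
from sklearn.svm import SVC   # Python stand-in; the paper's software used Weka

# 30 synthetic feature vectors per location (three rounds of ten taps)
# train the classifier, then one "live" tap is classified.

locations = ["arm", "wrist", "palm", "thumb", "middle finger"]
rng = np.random.default_rng(0)

# Fake training set: 30 examples x 186 features per location, with a
# per-class offset so the toy problem is cleanly separable.
X_train = np.vstack([rng.normal(loc=i, size=(30, 186)) for i in range(5)])
y_train = np.repeat(locations, 30)

clf = SVC(kernel="linear").fit(X_train, y_train)

live_tap = rng.normal(loc=2, size=(1, 186))   # pretend features from one tap
print(clf.predict(live_tap))                  # -> ['palm']
```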

4.3 BMI Effect

Early on, we suspected that our acoustic approach was susceptible to variations in body composition, most notably the prevalence of fatty tissue and the density and mass of bone, which respectively tend to dampen or facilitate the transmission of acoustic energy in the body. To assess how these variations affected sensing accuracy, we calculated each participant's body mass index (BMI) from self-reported weight and height. Data and observations from the experiment suggest that high BMI is correlated with decreased accuracy.

Figure 9: Accuracy was significantly lower for participants with BMIs above the 50th percentile.

The participants with the three highest BMIs (29.2, 29.6, and 31.9, representing borderline obese and obese individuals) produced the three lowest average accuracies. Figure 9 illustrates this significant disparity: here participants are separated into two groups, those with BMI greater than and those with BMI less than the US national median, age- and sex-adjusted [6] (F1,12=8.65, p=.013). Other factors such as age and sex, which may be correlated with BMI in specific populations, might also exhibit a correlation with classification accuracy. For example, in our participant pool, males yielded higher classification accuracies than females, but we expect this is an artifact of BMI correlation in our sample, and probably not an effect of sex directly.
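For reference, the BMI computation itself; the height/weight pairs below are invented, chosen only to land near the borderline values discussed above:

```python
# BMI as used in the study: self-reported weight (kg) divided by
# height (m) squared.

def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

print(round(bmi(88.0, 1.72), 1))   # -> 29.7, borderline obese
print(round(bmi(98.0, 1.75), 1))   # -> 32.0, in the obese range
```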

CHAPTER 5
Advantages

• Easy to use: Skinput technology is very easy to understand and operate; it takes only about 20 minutes to learn how to work it.
• No interaction with the gadget: Normally, to use a mobile application we reach into a pocket, take out the device, unlock it, and then open the application. With Skinput, no interaction with the gadget itself is needed: we simply tap a finger and the system performs the desired function.
• No worry about the keypad: People with large fingers have trouble operating touch screens. Skinput provides a very large interaction surface area, resolving this problem.
• Works without visual contact: Some operations, such as controlling a music player, need only four or five buttons, so each fingertip can serve as a button. Such functions need no display and can be operated without any visual contact.
• Easy to access when your phone is not available.
• Allows users to interact more personally with their device.
• Larger buttons reduce the risk of pressing the wrong button.
• Through proprioception, once users learn where the locations are on the skin, they no longer have to look down to use Skinput, reducing the number of people looking down at their phones while driving.
• It can be used for a more interactive gaming experience.

CHAPTER 6
Disadvantages

• Skinput has its downsides, especially the large armband, which is not as easy to put on as it looks. Many people would not wear a bulky band around their arm all day just to use this product.
• Not everybody can use this product; the elderly, for example, often have a hard time adapting to new technology. We also have to take into consideration the inconvenience it could cause to people with invisible disabilities.
• The technology only works on directly exposed skin, so full-sleeve shirts cannot be worn while using it.
• Currently only five buttons achieve accuracy above 95%. A phone needs at least 10 buttons to dial a number or send a text message, so such cases are a problem.
• The easy accessibility may make people more socially distracted.
• If the user's BMI is above roughly 30, accuracy drops to around 80%.
• The armband is currently bulky.
• The visibility of the buttons projected on the skin can be reduced if the user has a tattoo on the arm.

CHAPTER 7
Applications

Skinput technology can be used with any mobile device; each platform just needs its own software, for example an Android application for Android phones, or .jar/.sis software for Symbian phones.

It can be used with iPods and other music devices that support Bluetooth. Such music players need only four or five buttons, so the fingertips can serve as inputs and the devices can be operated without any visual contact.

It can be used with gaming devices, allowing games to be played easily without joysticks or touch screens.

People with physical disabilities can operate the system easily.

Simple browsing systems that require few buttons (at most about 10) can be handled by this technology.

CHAPTER 8
Future Implementation

In order to assess the real-world practicality of Skinput, we are currently building a successor to our prototype that will incorporate several additional sensors, particularly electrical sensors and inertial sensors (accelerometers and gyroscopes). In addition to expanding the gesture vocabulary beyond taps, we expect this sensor fusion to allow considerably more accuracy, and more robustness to false positives, than each sensor alone. This revision of the prototype will also allow us to benefit from anecdotal lessons learned since building the first prototype: in particular, early experiments with subsequent prototypes suggest that the hardware filtering described above can be effectively replicated in software, allowing us to replace the relatively large piezoelectric sensors with micro-machined accelerometers.

This considerably reduces the size and electrical complexity of the armband. Furthermore, anecdotal evidence suggests that vibration frequency ranges as high as several kilohertz may contribute to tap classification, further motivating the use of broadband accelerometers. Finally, the multi-sensor armband will be wireless, allowing us to explore a wide variety of usage scenarios, as well as the general assertion that always-available input will inspire radically new computing paradigms.

Conclusion

In this paper, we have presented our approach to appropriating the human body as an input surface. We have described a novel, wearable bio-acoustic sensing array, built into an armband, that detects and localizes finger taps on the forearm and hand. Results from experiments have shown that the system performs very well for a series of gestures, even when the body is in motion. Additionally, we have presented initial results demonstrating other potential uses of the approach, which we hope to explore further in future work. These include single-handed gestures, taps with different parts of the finger, and differentiating between materials and objects. We conclude with descriptions of several prototype applications that demonstrate the rich design space we believe Skinput enables.

REFERENCES

1) Harrison, C., Tan, D., and Morris, D. "Skinput: Appropriating the Skin as an Interactive Canvas." Microsoft Research, 2011.
2) Harrison, C. and Hudson, S.E. "Scratch Input: Creating Large, Inexpensive, Unpowered and Mobile Finger Input Surfaces." In Proc. UIST 2008, 205-208.
3) Amento, B., Hill, W., and Terveen, L. "The Sound of One Hand: A Wrist-Mounted Bio-Acoustic Fingertip Gesture Interface." In Proc. CHI 2002.
4) Hahn, T. "Future Human Computer Interaction with Special Focus on Input and Output Techniques." HCI, March 2006.
5) Burges, C.J. "A Tutorial on Support Vector Machines for Pattern Recognition." Data Mining and Knowledge Discovery, 2.2, June 1998, 121-167.
6) "Clinical Guidelines on the Identification, Evaluation, and Treatment of Overweight and Obesity in Adults." National Heart, Lung and Blood Institute, June 17, 1998.
7) Deyle, T., Palinko, S., Poole, E.S., and Starner, T. "Hambone: A Bio-Acoustic Gesture Interface." In Proc. ISWC 2007, 1-8.
8) Erol, A., Bebis, G., Nicolescu, M., Boyle, R.D., and Twombly, X. "Vision-Based Hand Pose Estimation: A Review." Computer Vision and Image Understanding, 108, October 2007.
9) Fabiani, G.E., McFarland, D.J., Wolpaw, J.R., and Pfurtscheller, G. "Conversion of EEG Activity into Cursor Movement by a Brain-Computer Interface (BCI)." IEEE Trans. on Neural Systems and Rehabilitation Engineering, 12.3, September 2004, 331-338.
10) Grimes, D., Tan, D., Hudson, S.E., Shenoy, P., and Rao, R. "Feasibility and Pragmatics of Classifying Working Memory Load with an Electroencephalograph." In Proc. CHI 2008, 835-844.
