Milestones

The first draft of the paper is done! It comes out at about 12 pages. I’ll need to cut it down to 6 to submit for CHI 2014 WIP. Easier than writing though. Of course, that’s just the first draft. More to come, I’m guessing. Still, it’s a nice feeling, and since I’ve burned through most of my 20% time, it’s time for me to get back to actually earning my pay, so I’ll be taking a break from this blog for a while. More projects are coming up though, so stay tuned. I’ll finish up this post with some images of all the design variations that led to the final, working version:

Prototype Evolution


The chronological order of development is from left to right and top to bottom. Starting at the top left:

  • The first proof of concept. Originally a force-input / motion-feedback system. It was with this system that I discovered that all actuator motion had to be relative to a proximal base.
  • The first prototype. It had six degrees of freedom, allowing a user to move a gripper within a 3D environment and grab items. It worked well enough that it led to…
  • The second prototype. A full 5-finger gripper attached to an XYZ base. I ran into problems with this one. It turned out that motion feedback required too much cognitive load to work. The user would lose track of where their fingers were, even with the proximal base. So that led to…
  • The third prototype. This used resistive force sensors and vibrotactile feedback. The feedback was provided by voice coils capable of the full audio range, which meant that all kinds of sophisticated contact and surface effects could be provided. That proved the point that 5 fingers could work with vibrotactile feedback, but the large-scale motions of the base seemed to need motion (I’ve since learned that isometric devices are most effective over short ranges). This prototype was also loaded with electronic concepts that I wanted to try out – Arduino sensing, MIDI synthesizers per finger, etc.
  • For the fourth prototype, to explore direct motion for the base, I 3D printed a 5-finger Force Input / Vibrotactile Output (FS/VO) system that sits on top of a mouse. This was a plug-and-play substitution that worked with the previous electronics and worked quite nicely, though the ability to grip doesn’t give you much to do in the XY plane.
  • To get 3D interaction, I took two FS/VO modules and added them to a Phantom Omni. I also dropped the Arduino and the synthesizer, using XAudio2 8-channel audio and a Phidgets interface card instead. This system worked very nicely. The FS/VO elements combined with a force-feedback base turned out to be very effective. That’s what became the basis for the paper, and hopefully the basis for future work.
  • Project code is here (MD5: B32EE89CEA9C8E02E5B99BFAF24877A0).

Packaging!

Ok, here it is, all ready to travel:

[Photo: IMG_2192]

It’s still a bit of a rat’s nest inside the box, but I’ll clean that up later today.

Adding a “practice mode” to the app. It will read in a setup file and allow the user to try any of the feedback modalities, randomized using srand(current milliseconds). Done.
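For reference, a minimal sketch of that seeding in C++ (the function and parameter names here are placeholders, not the app’s actual names):

```cpp
#include <chrono>
#include <cstdlib>

// Seed the C runtime RNG with the current time in milliseconds so each
// practice session gets a different ordering of feedback modalities.
void seedPracticeMode() {
    using namespace std::chrono;
    unsigned int ms = static_cast<unsigned int>(
        duration_cast<milliseconds>(
            system_clock::now().time_since_epoch()).count());
    srand(ms);
}

// Example use: pick one of the modalities at random. numModalities is a
// stand-in for however many entries the setup file defines.
int pickModality(int numModalities) {
    return rand() % numModalities;
}
```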

Sent an email off to these folks asking how to get their C2-transducers.

Need to look into perceptual equivalence WRT speech/nonspeech tactile interaction. Here’s one paper that might help: http://www.haskins.yale.edu/Reprints/HL0334.pdf

Fixed my truculent pressure sensor and glued the components into the enclosure. Need to order a power strip.

Blew my hand off for a while

I’m in the process of turning the Phantom testbed code into a research tool. This means that a lot of items that have been #defines now need to be variables and such.

One of the mechanisms that the shared memory app uses to communicate is a char[255] message. I basically sprintf whatever I want into that, and I can then debug both applications simultaneously.

However, after checking to see that some data were coming across correctly, I took the formatting argument out of the sprintf statement and left the value in. Suddenly I was overflowing the 255 limit and causing all kinds of havoc. Took a few hours to chase that one down. That’s what you get for playing with C/C++. Moving on.
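Roughly what happened, as a reconstruction (the variable names are illustrative, not the actual code):

```cpp
#include <cstdio>

char message[255];          // the shared-memory debug message
const char* value = "...";  // whatever data is being passed across

// Original: the value goes through a literal format string. Safe.
sprintf(message, "%s", value);

// After the edit: the value itself becomes the format string. If it
// happens to contain '%' characters, sprintf reads garbage varargs and
// can write far past the 255-byte limit.
sprintf(message, value);

// The safer habit: keep the format literal and bound the write.
snprintf(message, sizeof(message), "%s", value);
```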

Anyway, I now have an event handling loop, and am able to load target spheres into the application and associate them with a sound file. Tomorrow we’ll try getting the sounds associated with the targets to play. There are some issues, primarily that the gripper can touch multiple targets simultaneously. Still, it looks pretty straightforward. After that I’ll start to roll the TestManager and TestResults classes into the application.

The other thing to do for the day is to check out the headset code with Brian this evening in the lab and see if the output file bug has either disappeared or can be replicated.

Deadlines and schedules

I was just asked to see how many hours I have left for working on this research. It turns out that, at the rate I’m going, I can continue until mid-October. This is basically a big shout-out to Novetta, which has granted a continuation of my 20% time that was originally a hiring condition when I went to work for Edge. Thanks. And if you’d like a programming job in the DC area that supports creativity, give them a call.

I just can’t make the audio code break when writing out results. Odd. Maybe a corrupt input file can have unforeseen effects? Regardless, I’m going to stop pursuing this particular bug without more information.

Fixing the state problem. Done.

Fixing the saving issue. Also changing the naming of the speakers to reflect Dolby or not. Done.

New version release built and deployed.

And back to Phantom++

[Screenshot: TestScreenV1]

I started to add in the user interface that will support experiments. Since it was already done, I pulled in most of the Fluid code from the vibrotactile headset, which made things pretty easy. I needed to add an enclosing control system class that can move commands between the various pieces.

I’ve also decided that each sound will have an object associated with it. This lets each object carry a simple “acoustic” texture that doesn’t require any fancy data structure.
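Something like the following is what I have in mind (the field names are placeholders, not final):

```cpp
#include <string>

// One object per sound: the target's position plus a simple "acoustic"
// texture described by a handful of scalars rather than a fancy structure.
struct SoundObject {
    std::string soundFile;   // the sound associated with this target
    float x, y, z;           // target sphere position
    float radius;            // target sphere size
    float roughness;         // crude texture parameters that modulate playback
    float stiffness;
};
```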

At this point, I’m estimating that the first version of the test program should be ready by Friday.

Sounds like déjà vu.

Adding custom speaker number and placement as per Dr. Kuber’s request.

Looks like the dot product should do the trick.
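One way the dot product can carry the weight here, sketched with made-up names: given unit vectors from the listener to the source and to each speaker, the speaker whose direction has the largest dot product with the source direction is the best candidate (and the dot products themselves could serve as panning weights).

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

Vec3 normalize(const Vec3& v) {
    float len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Pick the speaker most nearly in line with the source direction.
// speakerDirs are unit vectors from the listener to each speaker.
int closestSpeaker(const Vec3& sourceDir, const std::vector<Vec3>& speakerDirs) {
    Vec3 s = normalize(sourceDir);
    int best = 0;
    float bestDot = -2.0f;
    for (size_t i = 0; i < speakerDirs.size(); ++i) {
        float d = dot(s, speakerDirs[i]);
        if (d > bestDot) { bestDot = d; best = static_cast<int>(i); }
    }
    return best;
}
```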

Done! With only a couple of string compare issues. I also had to make the speaker index jump around the subwoofer channel until I can work out how to set the EQ.
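The index shuffle looks roughly like this; I’m assuming the standard multichannel ordering where the LFE sits at channel 3, which may not match every setup:

```cpp
// Map a logical speaker number (0..N-1, no subwoofer) onto an output
// channel index, hopping over the LFE channel so nothing full-range is
// routed to the sub. LFE_CHANNEL = 3 assumes the usual FL, FR, FC, LFE,
// ... channel ordering.
const int LFE_CHANNEL = 3;

int speakerToChannel(int speakerIndex) {
    return (speakerIndex < LFE_CHANNEL) ? speakerIndex : speakerIndex + 1;
}
```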

And it looks like there are bugs in the code. It seems that you cannot do zero-speed sessions. And the writing out of results with multiple sound files looks pretty confused. I’m not sure if extra CRs are being put in there or if some of the data isn’t being written out. Need to run some more examples.

Pulling everything apart and putting it back together

  • Adding multiple sound playback
    • Rework the output to handle multiple sounds. This means one TestResult per sound. However, the result cannot be associated with a specific sound, so for each release, all the emitter sources will have to be included (see the sketch after this list). Later analysis can be used to determine the best fit. Note also that the number of attempts may be greater or fewer than the number of emitters.
  • Need to use the XML to write out and read in just the configuration values
  • Need to save multiple source positions in TestResult. Added placeholder code at that point so I can continue.
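A sketch of what one TestResult per sound might carry, under the constraint that a release can’t be tied to a single emitter (the names are placeholders, not the actual class):

```cpp
#include <string>
#include <vector>

struct Vec3f { float x, y, z; };

// One record per sound played during a session. Because a release can't be
// attributed to a single emitter, every emitter source active at the time is
// stored; later analysis works out the best fit. The number of attempts may
// not match the number of emitters.
struct TestResult {
    std::string soundFile;
    std::vector<Vec3f> emitterPositions;  // all active emitter sources
    std::vector<Vec3f> releasePositions;  // where the gripper released
    double elapsedSeconds;
};
```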