Some thoughts about trust and awareness

I had some more thoughts about how behavior patterns emerge from the interplay between trust and awareness. I think the following may be true:

  1. Trust is a social construct for dealing with incomplete information. It’s a shortcut that essentially states: “based on some set of past experiences, I will assume that this (now trusted) entity will behave in a predictable, reliable, and beneficial way for me.”
  2. Awareness refers to how complete the knowledge of the information domain is. Completely aware indicates complete information. Unaware indicates not only absent information but no knowledge of the domain at all.
  3. Healthy behaviors emerge when trust and awareness are equivalent.
  4. Low trust and low awareness is reasonable. It’s like walking through a dark, unknown space. You go slow, bump into things, and adjust.
  5. Low trust and high awareness is paralytic.
  6. High trust and low awareness is reckless – it produces runaway conditions like echo chambers.
  7. Diversity is a mechanism for extending awareness, but it depends on trusting those who are different. That may be the essence of the explore/exploit dilemma.
  8. In a healthy group context, trust falls off as awareness does. That’s why we get flocking: the pattern that emerges when you trust those who are close more, while they in turn do the same, building a web of interaction. It’s kind of like interacting ripples?
  9. This may work for any collection of entities that have varied states that undergo change in some predictable way. If they were completely random, then awareness of the state is impossible, and trust should be zero.
    1. Human agent trust chains might proceed from self to family to friends to community, etc.
    2. Machine agent trust chains might proceed from self to direct connections (thumb drives, etc.) to LAN to WAN
    3. The genetic agent trust chain is short – self to species. Contact is only for reproduction, and interaction would reflect the very long sampling times.
    4. Note that (1) is evolved and is based on incremental and repeated interactions, while (2) is designed and is based on arbitrary rules that can change rapidly. Genetics are maybe dealing with different incentives? The only issue is persisting and spreading (which helps in the persisting)
  10. Computer-mediated communication disturbs this process (as does probably every form of mass communication) because trust in the system is applied to trust in the content. This can cut both ways. For example, lowering trust in the press allows for claims of Fake News. Raising trust in social networks that channel anonymous online sources allows for conspiracy thinking.
  11. An emerging risk is how this affects artificial intelligence, given that the builders currently assume high trust in the algorithms and training sets.
    1. Low numbers of training sets mean low diversity/awareness.
    2. Low numbers of algorithms (DNNs) also mean low diversity/awareness.
    3. Since training/learning is spread by update, the installed base is essentially multiple instances of the same individual. So: no diversity and very high trust. That’s a recipe for a stampede of 10,000 self-driving cars.
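
The flocking idea in point 8 can be sketched as a toy simulation: each agent re-averages its heading over its neighbors’ headings, weighted by a trust factor that decays with distance (standing in for awareness falling off). All names, positions, and the decay constant here are invented for illustration:

```python
import math

# Toy flock: each agent has a position and a heading (radians).
agents = [
    {"pos": (0.0, 0.0), "heading": 0.0},
    {"pos": (1.0, 0.0), "heading": 0.5},
    {"pos": (10.0, 0.0), "heading": 3.0},  # distant agent -> near-zero trust
]

def trust(d, scale=2.0):
    """Trust decays with distance -- a stand-in for awareness falling off."""
    return math.exp(-(d / scale) ** 2)

def new_heading(agent, flock):
    """Circular mean of headings, weighted by trust in each neighbor."""
    sx = sy = 0.0
    for other in flock:
        w = trust(math.dist(agent["pos"], other["pos"]))
        sx += w * math.cos(other["heading"])
        sy += w * math.sin(other["heading"])
    return math.atan2(sy, sx)

for a in agents:
    print(round(new_heading(a, agents), 3))
```

Each nearby pair pulls each other’s heading together, while the distant agent is effectively ignored – local trust, global pattern.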




A little more direction?

  • In a meeting with Dr. Kuber, I brought up something that I’ve been thinking about since the weekend. The interface works, provably so. The pilot study shows that it can be used for (a) training and (b) “useful” work. If the goal is to produce “blue-collar telecommuting,” then the question becomes: how do we actually achieve that? A dumb master-slave system makes very little sense for a few reasons:
    • Time lag. It may not be possible to always get a fast enough response loop to make haptics work well
    • Machine intelligence. With robots coming online like Baxter, there is certainly some level of autonomy that the on-site robot can perform. So, what’s a good human-robot synergy?
  • I’m thinking that a hybrid virtual/physical interface might be interesting.
    • The robotic workcell is constantly scanned and digitized by cameras. The data is then turned into models of the items that the robot is to work with.
    • These items are rendered locally to the operator, who manipulates the virtual objects using tight-loop haptics, 3D graphics, etc. Since (often?) the space is well known, the objects can be rendered from a library of CAD-correct parts.
    • The robot follows the “path” laid down by the operator’s manipulations. The position and behavior of the actual robot is represented in some way (ghost image, warning bar, etc.). This is known as Mediated Teleoperation, and is described nicely in this paper.
    • The novel part, at least as far as I can determine at this point, is using mediated telepresence to train a robot in a task:
      • The operator can instruct the robot to learn some or all of a particular procedure. This probably entails setting entry, exit, and error conditions for tasks, which the operator is able to create on the local workstation.
      • It is reasonable to expect that in many cases, this sort of work will be a mix of manual control and automated behavior. For example, placing a part may be manual, but screwing a bolt into place to a particular torque could be entirely automatic. If a robot’s behavior is made fully autonomous, the operator needs simply to monitor the system for errors or non-optimal behavior. At that point, the operator could engage another robot and repeat the above process.
      • User interfaces that inform the operator when the robot is coming out of autonomous modes in a seamless way need to be explored.
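
The entry/exit/error-condition idea above could look something like this small supervisor sketch: the operator authors the predicates, the robot runs the step autonomously, and control is handed back when an error condition fires. Every name, condition, and number here is hypothetical:

```python
# Hypothetical sketch of a trained task step with operator-authored
# entry, exit, and error conditions.

class TaskStep:
    def __init__(self, name, entry, exit_, error, action):
        self.name = name
        self.entry = entry    # predicate: OK to start autonomously?
        self.exit = exit_     # predicate: step completed?
        self.error = error    # predicate: hand back to manual control?
        self.action = action  # one autonomous control tick

def run_step(step, state, max_ticks=100):
    """Run one step; returns 'done', 'error', or 'timeout', mutating state."""
    if not step.entry(state):
        return "error"
    for _ in range(max_ticks):
        if step.error(state):
            return "error"   # operator takes over
        if step.exit(state):
            return "done"
        step.action(state)
    return "timeout"

# Example: torque a bolt to a target value (numbers invented).
state = {"torque": 0.0, "cross_threaded": False}
bolt = TaskStep(
    "torque_bolt",
    entry=lambda s: not s["cross_threaded"],
    exit_=lambda s: s["torque"] >= 5.0,
    error=lambda s: s["cross_threaded"],
    action=lambda s: s.__setitem__("torque", s["torque"] + 0.5),
)
print(run_step(bolt, state))  # -> done
```

The point of the sketch is the control handoff: manual placement and autonomous torquing are just steps with different `action`s and the same supervisor loop.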


With 10 subjects running two passes each through the system, I now have significant results (using one-way ANOVA) for the Phantom setup. First, user errors:

Linear Hypotheses:
Estimate Std. Error t value Pr(>|t|)
HAPTIC_TACTOR - HAPTIC == 0 -0.3333 0.3123 -1.067 0.7110
OPEN_LOOP - HAPTIC == 0 0.5833 0.3123 1.868 0.2565
TACTOR - HAPTIC == 0 1.0000 0.3123 3.202 0.0130 *
OPEN_LOOP - HAPTIC_TACTOR == 0 0.9167 0.3123 2.935 0.0262 *
TACTOR - HAPTIC_TACTOR == 0 1.3333 0.3123 4.269 <0.001 ***
TACTOR - OPEN_LOOP == 0 0.4167 0.3123 1.334 0.5466
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
(Adjusted p values reported -- single-step method)

Next, normalized user task completion speed:

Linear Hypotheses:
Estimate Std. Error t value Pr(>|t|)
HAPTIC_TACTOR - HAPTIC == 0 0.11264 0.07866 1.432 0.4825
OPEN_LOOP - HAPTIC == 0 0.24668 0.07866 3.136 0.0118 *
TACTOR - HAPTIC == 0 0.17438 0.07866 2.217 0.1255
OPEN_LOOP - HAPTIC_TACTOR == 0 0.13404 0.07866 1.704 0.3269
TACTOR - HAPTIC_TACTOR == 0 0.06174 0.07866 0.785 0.8612
TACTOR - OPEN_LOOP == 0 -0.07230 0.07866 -0.919 0.7947
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
(Adjusted p values reported -- single-step method)

So what this says is that HAPTIC_TACTOR has the lowest error occurrence, and that HAPTIC is the fastest at achieving the task (note – there may be some force-feedback artifacts that contribute to this result, but that will be dealt with in the next study).
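
As a reference for what the one-way ANOVA behind these tables computes, here is the F statistic worked out by hand in Python, on invented error counts for the same four condition names (not the actual study data):

```python
# One-way ANOVA F statistic computed by hand on INVENTED error counts
# for the four conditions -- not the actual study data.
groups = {
    "HAPTIC":        [0, 1, 0, 1, 0],
    "HAPTIC_TACTOR": [0, 0, 0, 1, 0],
    "TACTOR":        [2, 1, 2, 1, 2],
    "OPEN_LOOP":     [1, 1, 2, 0, 1],
}

def f_oneway(groups):
    data = [x for g in groups.values() for x in g]
    grand = sum(data) / len(data)
    # Between-group sum of squares: group means vs. the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2
                     for g in groups.values())
    # Within-group sum of squares: observations vs. their group mean
    ss_within = sum((x - sum(g) / len(g)) ** 2
                    for g in groups.values() for x in g)
    df_between = len(groups) - 1
    df_within = len(data) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

print(round(f_oneway(groups), 2))  # -> 6.15
```

The pairwise lines in the tables above are the follow-up (single-step adjusted) comparisons that R’s multcomp package runs once the omnibus F is significant.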

This can be shown best by looking at some plots. Here are the error results as means plots:


And here are the means plots for task completion speed:


Since this is a pilot study with only 10 participants, the populations are only just separating in a meaningful way, but the charts suggest that HAPTIC and HAPTIC_TACTOR will continue to separate from OPEN_LOOP and TACTOR.

What does this mean?

First – and this is only implicit from the study – it is possible to attach simpler, cheaper sensors and actuators (force and vibration) to a haptic device and get good performance. Even with simple semi-physics, all users were able to grip and manipulate the balls in the scenario well enough to achieve the goal. Ninety percent of the users who made no errors placing 5 balls in the goal took between 20 and 60 seconds, or between 4 and 12 seconds per ball (including moving to the ball, grasping it, and successfully depositing it in a narrow goal). Not bad for less than $30 in sensors and actuators.

Second, force feedback really makes a difference. Doing tasks in an “open loop” framework is significantly slower than doing the same task with force feedback. I doubt that this is something users will simply get better at, so the question with respect to gesture-based interaction is how to compensate. As the results show, it is unlikely that tactors alone can help with this problem. What will?

Third, not every axis needs full force feedback. It seems that as long as the “reference frame” has force feedback, the inputs that work with respect to that frame don’t need to be as sophisticated. This means that low(ish)-cost, high-DOF systems using hybrid technologies such as force feedback plus force/vibration sensing may be possible. This might open up a new area of exploration in HCI.

Lastly, the question of how multiple modalities could effectively perform as assistive technologies needs to be explored with this system. There is only a limited set (4?) of ways to render positional information to a user (visual, tactile, auditory, proprioceptive), and this configuration as it currently stands is capable of three of them. However, because of the way the DirectX sound library is used to provide the tactile information, it is trivial to extend the setup so that 5 channels of audio information could also be provided to the user. I imagine four speakers placed at the four corners of a monitor, providing an audio rendering of the objects in the scene. A subwoofer channel could provide additional tactile(?) information.

Once multiple modalities are set up, the visual display can be constrained in a variety of ways: blurred, intermittently frozen, or blacked out. Configurations of haptic/tactile/auditory stimuli could then be tested against these scenarios to determine how they affect completion of the task. Conversely, the user could be distracted (for example, in a driving game) so that it is impossible to pay extensive attention to the placement task. There are lots of opportunities.

Anyway, it’s been a good week.

The Saga Continues, and Mostly Resolves.

Continuing the ongoing saga of trying to get an application written in MSVC under Visual Studio 2010 to run on ANY OTHER WINDOWS SYSTEM than the dev system. Today I should be finishing the update of the laptop from Vista to Win7. Maybe that will work. Sigh.

Some progress. It seems you can’t use the “Global\” prefix in the way specified in the Microsoft documentation for CreateFileMapping() unless you want to run everything as admin. See StackOverflow for more details.

However now the code is crashing on initialization issues. Maybe something to do with OpenGL?

It’s definitely OpenGL. All apps that use it either crash or refuse to draw.

Fixed. I needed to remove the drivers and install NVIDIA’s (earlier) versions. I’m not getting the debug text overlay, which is odd, but everything else is working. Sheesh. I may re-install the newest drivers, since I now have a workable state that I know I can reach, but I think it’s time to do something other than wait for the laptop to go through another install/reboot cycle.

Started writing the haptic paper. Targets are CHI, UIST, or HRI. Maybe even MIG? This is now a very different paper from the Motion Feedback paper from last year, and I’m not sure what the best way to present the information is. The novel part is the combination of a simple (i.e. 3-DOF) haptic device with an N-DOF force-based device attached. The data shows that this combination has much lower error rates and faster task completion times than the other configurations (tactor only and open loop), and the same completion times as a purely haptic system. Not sure how to organize this yet…

This is also pretty interesting… Either for iRevolution or ArTangibleSim

The unbearable non-standardness of Windows

I have been trying to take the Phantom setup on the road for about two weeks now. It’s difficult because the Phantom uses FireWire (IEEE 1394), and it’s hard to find something small and portable that supports that.

My first attempt was to use my Mac Mini. Small. Cheap(ish). Ports galore. Using Boot Camp, I installed a copy of Windows 7 Pro. That went well, but when I tried to use the Phantom, the system would hang when attempting to send forces. Reading joint angles was OK, though.

I then tried my new Windows 8 laptop, which has an extension slot. The shared memory wouldn’t even run there; access to the shared space appears not to be allowed.

The next step was to try an old development laptop that had a Vista install on it. The Phantom ran fine, but the shared memory communication caused the graphics application to crash. So I migrated the Windows 7 install from the Mac to the laptop, where I’m currently putting all the pieces back together.

It’s odd. It used to be that if you wrote code on one Windows platform, it would run on all Windows platforms. Those days seem long gone. It looks like I can get around this problem if I change my communication scheme to sockets or something similar, but I hate that. Shared memory is fast and clean.
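
For comparison, the shared-memory pattern itself is simple. Here’s a minimal Python-stdlib sketch of one side writing a named block and the other attaching to it by name – the name and sizes are invented, this isn’t the Win32 CreateFileMapping() code, and it sidesteps the “Global\” permission issue rather than solving it:

```python
from multiprocessing import shared_memory

# "Writer" side: create a named shared block and drop some bytes into it.
shm = shared_memory.SharedMemory(create=True, size=16, name="phantom_demo")
shm.buf[:5] = b"hello"

# "Reader" side (would normally be another process): attach by name.
reader = shared_memory.SharedMemory(name="phantom_demo")
msg = bytes(reader.buf[:5])
print(msg)  # -> b'hello'

reader.close()
shm.close()
shm.unlink()  # free the block
```

Same idea as a named file mapping: no copies through a socket, just two processes looking at the same bytes.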

Slow. Painful. Progress. But at least it gives me some time to do writing…


Looks like we got some results with the headset system. Still trying to figure out what it means (other than the obvious: it’s easier to find the source of a single sound).

Here are the confidence intervals:


Next I try to do something with the Phantom results. I think I may need some more data before anything shakes out.

Moving beyond PoC

Switched out the old, glued together stack of sensors for a set of c-section parts that allow pressure on the sensor to be independent of the speaker. They keep falling off though.

Trying now with more glue and cure time. I also need to get some double-stick tape.

More glue worked!

Modified the code so that multiple targets can exist and experimented with turning forces off.