First Draft?

I think I’ve made enough progress with the coding to have something useful. And everything is using the AngularJS framework, so I’m pretty buzzword compliant. Well, that may be too bold a statement, but at least it can make data that can be analyzed by something a bit more rigorous than just looking at it in a spreadsheet.

Here’s the current state of things:
The data analysis app: http://philfeldman.com/iRevApps/irevdb.html

Changes:
  • Improved the UX on both apps. The main app first.
    • You can now look at other posters’ posts without ‘logging in’. You have to type the passphrase to add anything though.
    • It’s now possible to search through the posts by search term and date
    • The code is now more modular and maintainable (I know, not that academic, but it makes me happy)
    • Twitter and Facebook crossposting are coming.
  • For the db app
    • Dropdown selection of common queries
    • Tab selection of query output
    • Rule-based parsing (currently keyDownUp, keyDownDown, and word)
    • Excel-ready CSV output for all rules
    • WEKA-ready output for keyDownUp, keyDownDown, and word. A caveat on this. The WEKA ARFF format wants to have all session information in a single row. This has two ramifications:
      • There has to be a column for every key/word, including the misspellings. For the training task it’s not so bad, but for the free form text it means there are going to be a lot of columns. WEKA has a marker ‘?’ for missing data, so I’m going to start with that, but it may be that the data will have to be ‘cleaned’ by deleting uncommon words.
      • Since there is only one column per key/word, keys and words that are typed multiple times have to be grouped somehow. Right now I’m averaging, but that loses a lot of information. I may add a standard deviation measure, but that will mean double the columns. Something to ponder.
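The grouping scheme above can be sketched in a few lines. This is just an illustration of the idea, not the app’s actual code: collect the per-session timing samples, average the repeats, and emit WEKA’s ‘?’ marker for any vocabulary word the session never typed.

```javascript
// Sketch: collapse per-session (word, milliseconds) samples into one
// ARFF-style row, averaging repeats and using '?' for missing words.
// Function and field names here are illustrative.
function sessionToArffRow(samples, vocabulary) {
    var sums = {}, counts = {};
    samples.forEach(function (s) {
        sums[s.word] = (sums[s.word] || 0) + s.milliseconds;
        counts[s.word] = (counts[s.word] || 0) + 1;
    });
    // One column per vocabulary word, in a fixed order.
    return vocabulary.map(function (word) {
        return counts[word] ? (sums[word] / counts[word]) : '?';
    });
}

var row = sessionToArffRow(
    [{word: 'the', milliseconds: 300},
     {word: 'the', milliseconds: 500},
     {word: 'cat', milliseconds: 420}],
    ['the', 'cat', 'dog']);
// row is [400, 420, '?'] -- 'the' averaged, 'dog' missing
```

Adding a standard deviation would mean a second pass over `sums`/`counts`, and a second column per word, which is where the column count doubles.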

Lastly, Larry Sanger (co-founder of Wikipedia) has started a wiki-ish news site. It’s possible that I could piggyback on this effort, or at least use some of their ideas/code. It’s called infobitt.com. There’s a good manifesto here.

Normally, I would be able to start analyzing data now, with WEKA and SPSS (which I bought/leased about a week ago), but my home dev computer died and I’m waiting for a replacement right now. Frustrating.

Cluster Analysis in Mathematica

UMBC appears to have a Wolfram Pro account and student copies of Mathematica, covered by tuition, it seems. I need to do cluster analysis on words, trigraphs and digraphs. This seems to be a serious win. One option is to use the heavy client. This page seems to cover that.

I wonder if I can use Alpha Pro as a service for an analysis page though. That could be very cool. It certainly seems like a possibility. More as this progresses…

Notes:

  • R Commander Two-way Analysis of Variance Model – https://www.youtube.com/watch?v=uSI1CIHEZcc
  • Success! In that I was able to read in a file (Insert->File Path…), then click Import under the line. Boy, that’s intuitive…
  • ANOVA (yes, all caps) runs like this: ANOVA[myModel, {myFactor1, myFactor2, All}, {myFactor1, myFactor2}]

Trigraphs and digraphs and milliseconds oh my!

I’ve been reading my papers on recognizing biometrics from keystroke info, and it generally seems to either work from training neural nets or from examining the timing of certain letter patterns, particularly digraphs and trigraphs. I’m currently working on parsing the raw data into more manageable data that can be stored in a form-specific table:

CREATE TABLE IF NOT EXISTS `trigraph_table` (
  `uid` int(11) NOT NULL AUTO_INCREMENT,
  `session_id` varchar(255) NOT NULL,
  `word` varchar(255) NOT NULL,
  `milliseconds` int(11) NOT NULL,
  PRIMARY KEY (`uid`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=1 ;
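The parsing that feeds a table like this is conceptually simple. A sketch of the digraph case (trigraphs are the same idea with a window of three): digraph latency is usually measured between successive key-down events, which the raw log records directly. The record shape here is an assumption about the log format, not the actual parser.

```javascript
// Sketch: pull digraph latencies (ms between successive key-down events)
// out of a raw event log of {key, time, state} records. Illustrative only.
function digraphTimings(events) {
    // Only key-down events matter for digraph latency.
    var downs = events.filter(function (e) { return e.state === 'down'; });
    var result = [];
    for (var i = 1; i < downs.length; i++) {
        result.push({
            digraph: downs[i - 1].key + downs[i].key,
            milliseconds: downs[i].time - downs[i - 1].time
        });
    }
    return result;
}

var timings = digraphTimings([
    {key: 't', time: 1000, state: 'down'},
    {key: 't', time: 1080, state: 'up'},
    {key: 'h', time: 1150, state: 'down'},
    {key: 'h', time: 1210, state: 'up'},
    {key: 'e', time: 1300, state: 'down'}
]);
// timings: [{digraph: 'th', milliseconds: 150},
//           {digraph: 'he', milliseconds: 150}]
```

Each element of the result maps straight onto a row of the table above, with the digraph (or word) in the `word` column.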

I can point back to the session_table for all the associated data if there is not sufficient clustering with just the session_id. Speaking of the session_table, it’s getting downright scary:

CREATE TABLE IF NOT EXISTS `session_table` (
`uid` int(11) NOT NULL AUTO_INCREMENT,
`session_id` varchar(255) NOT NULL,
`type` int(11) NOT NULL,
`entry_time` datetime NOT NULL,
`ip_address` varchar(255) NOT NULL,
`browser` varchar(255) NOT NULL,
`referrer` varchar(255) NOT NULL,
`submitted_text` text NOT NULL,
`raw` MEDIUMTEXT NOT NULL,
`parent_session_id` varchar(255) DEFAULT NULL,
`veracity` int(11) NOT NULL,
`hostname` varchar(255) NOT NULL,
`city` varchar(255) NOT NULL,
`region` varchar(255) NOT NULL,
`country` varchar(255) NOT NULL,
`latlong` varchar(255) NOT NULL,
`service_provider` varchar(255) NOT NULL,
`postal` varchar(255) NOT NULL,
  PRIMARY KEY (`uid`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=1 ;

The idea that we are anonymous is just silly; from the moment we connect to a server, we’re basically exposed. And I’m just getting started with gathering data. Imagine what experienced web developers can do, particularly once cookies are enabled. Minimizing that damage to someone who posts to the site is probably on the list of things to look at. Maybe server hardening? Certainly no links from the page.

On a different note, I’ve been thinking a bit about how confident we need to be of a source before we can start determining information trustworthiness. It seems to me that if we can show that the information comes from a large number of distinct individuals, that tells us something too, even if we can’t reliably distinguish any one of them. That trips up the case of a single individual reporting a unique scoop, though.

Another thing that’s kind of interesting is typos and spell check. Keeping track of typos is easy – just run everything through a dictionary. A different, also potentially useful thing is to look at the differences between the words produced by keystrokes and the submitted text. Where those differ, some sort of spell correction was used; it doesn’t behave like a paste, or it would be trapped. Anyway, it’s another form of interesting data.
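A crude version of that comparison might look like the following. This is a sketch under the assumption that the keystroke log can be reconstructed into plain text; a real version would need edit-distance matching rather than a simple word-set difference.

```javascript
// Sketch: compare the words reconstructed from keystrokes against the
// words in the submitted text. Words present in the submission but never
// typed suggest a spell correction happened (pastes are blocked, so that
// path is ruled out). Hypothetical helper, not the app's actual parser.
function likelyCorrections(typedText, submittedText) {
    var typed = {};
    typedText.toLowerCase().split(/\s+/).forEach(function (w) {
        typed[w] = true;
    });
    return submittedText.toLowerCase().split(/\s+/).filter(function (w) {
        return w && !typed[w];
    });
}

var corrected = likelyCorrections('teh quick brwon fox',
                                  'the quick brown fox');
// corrected: ['the', 'brown'] -- words that were never actually typed
```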


Plans coming together

Ok, things are getting close. I have all the code pieces talking in a single application (http://philfeldman.com/irev1.html). After playing around with the ways that the key down/up events can be trapped, I decided to do as little processing as possible and simply record the keycode, time and status (up/down). The main reason for this is that things like the shift key are pressed while other keys are typed and then released. This way it’s easier to see that happen.
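The recording scheme described above reduces to something like this. The wiring is sketched with plain DOM listeners for clarity (the app itself goes through YUI), and the names are illustrative:

```javascript
// Sketch of the minimal-processing approach: record keycode, timestamp,
// and up/down state for every event, and defer all interpretation to
// the parser. Names are illustrative.
var keyLog = [];

function recordKeyEvent(keyCode, timeMs, state) {
    keyLog.push({key: keyCode, time: timeMs, state: state});
}

// In the browser this would be wired up roughly like:
// document.addEventListener('keydown', function (e) {
//     recordKeyEvent(e.keyCode, Date.now(), 'down');
// });
// document.addEventListener('keyup', function (e) {
//     recordKeyEvent(e.keyCode, Date.now(), 'up');
// });

// A shift key held across another keystroke shows up naturally:
recordKeyEvent(16, 1000, 'down');  // shift down
recordKeyEvent(65, 1050, 'down');  // 'a' down
recordKeyEvent(65, 1120, 'up');    // 'a' up
recordKeyEvent(16, 1200, 'up');    // shift up
```

Because nothing is interpreted at capture time, the log shows the shift key’s down event preceding, and its up event following, the letter it modifies, which is exactly the behavior that motivated this design.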

I also needed to prevent pasting, since that makes everything more complex (recognizing paste events, working around them, etc). It turns out that YUI doesn’t seem to handle the paste event, so you have to get it from the document directly:

var pasteTrap = document.getElementById('submittedTextInput');
pasteTrap.onpaste = function (e) {
    e.preventDefault();  // belt and suspenders alongside return false
    alert("Paste is not allowed");
    return false;
};

As usual, we have the fine contributors to StackOverflow to thank for pointing the way on this.

Amazingly, it even works in all browsers.

Next is cleanup, putting all the pieces into modules where they belong, and doing some better CSS. I think something that looks like Secret might be pretty easy to put together: colored backgrounds first, before coding up picture loading, but along those lines.

Last thing for the day is to finish the next pass at the IRB submission.

Safe(er) Data and Nonexistent Functions

If you want to reduce the likelihood of a SQL injection attack, use precompiled (prepared) queries. Nice in theory, tougher in practice. The nub of the problem appears to be the way that PHP binds data to execute the insert or the pull. With a nice, vulnerable query you can use string manipulation functions and as such write nice, general functions. However, if you’re mean, you can add something like “; DROP TABLE students; –” and poof, the table students is gone. Now, there should be a nice call that returns everything as an associative array, but that doesn’t seem to be reliable across PHP installations, so we need to work with the much more restrictive fetch().
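A toy illustration of the injection mechanism, in JavaScript rather than PHP since nothing needs to execute: the attacker’s input simply rides along inside the query text when the query is built by string concatenation.

```javascript
// Toy illustration of why string-built SQL is dangerous. The query is
// never executed here; the point is what ends up inside the string.
function naiveQuery(name) {
    return "SELECT * FROM students WHERE name = '" + name + "';";
}

var evil = naiveQuery("Robert'; DROP TABLE students; --");
// evil now contains "DROP TABLE students" as live SQL text.
//
// A prepared statement keeps the input as pure data instead:
//   SELECT * FROM students WHERE name = ?
// with the value bound separately, so it can never become SQL.
```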

Things to remember:

  • Everything has to happen when the statement is available, between prepare() and close().
  • Use bind_param(string $types, …) to send data and bind_result() for returning data. bind_param() is less picky; you can pass elements of an array directly. For bind_result() you have to have individual variables declared.
  • When things go wrong in the PHP mysql code, it is likely that an HTML table will be returned. That will need to be handled.
  • JSON.stringify() and JSON.parse() of objects into and out of JSON may or may not handle hierarchies. Watch what goes on in the debugger.

Anyway, that just about doubled the line count in the middleware and bound the PHP code much more tightly to the form of the database. That being said, this is intended to have some production values in it anyway, so that may be a good thing. The new and improved results are in the same old place, namely io2.html. Next comes the integration of all that DB work, the recognizer part, and the panel part.

Basic Chores

Not much to write about, but some good work got squeezed in today. First, I was able to transition over to mysqli, which turned out to be nearly painless. I’ve been working on a thin layer that’s admittedly got some security holes, but that’s not what I’m trying to work through and the data’s junk anyway.

So to get use out of all this stuff, I need to have everything run on a server. I use Dreamhost, who I like a lot and have been with for years, and they give you PHP and mysql out of the box. So today was the day to try and take all the pieces that I have gotten working on my dev machine and migrate them to a place that people can access.  It did mean getting familiar with SSH and PuTTY all over again though.

The first step was creating a database. Since I’m on a shared server, that’s not as simple as when you own the instance, but Dreamhost has a dashboard that makes this pretty reasonable. It does take time for everything to trickle through, though. Once it was up and running I created a new copy of the same old table I’ve been using for my tests and populated it with the same old data.

Once that was done I fired up WinSCP and copied the files over, changed the config file and tried running the php script on the command line. Imagine my surprise when everything ran right the first time. And then compound that again when the web page worked as well. And both of those files had no changes. Repeat after me:

“Configuration files are wonderful”

“Relative addressing is also”

Anyway, here it is in all its glory: io2.html.

The next part is handling the submission of data to the db, which is making me a bit nervous about sql injection. I may just use the YUI Escape object to modify the string so it isn’t dangerous. Nope, that won’t work, but we can use blobs. Here’s how (from here):

/**
 * Update the files table with the new BLOB from the file specified
 * by the file path.
 * @param int $id
 * @param string $filePath
 * @param string $mime
 * @return boolean
 */
function updateBlob($id, $filePath, $mime) {
    // Open the file as a binary stream so it can be bound as a LOB.
    $blob = fopen($filePath, 'rb');

    $sql = "UPDATE files
            SET mime = :mime,
                data = :data
            WHERE id = :id";

    $stmt = $this->conn->prepare($sql);

    $stmt->bindParam(':mime', $mime);
    $stmt->bindParam(':data', $blob, PDO::PARAM_LOB);
    $stmt->bindParam(':id', $id);

    return $stmt->execute();
}

On a related note, I wonder how many of our actions can be stereotyped in a way that can be detected in the browser?