Step detection


Running

To run the executable, these are the two arguments used so far (giving the full path for both is suggested):

-xml (the XML file with the instantiation of the Mills)

-uwarin (the uwar file to be used for computation)

<bash>./inference -xml path/xml_file.xml -uwarin path/log_file***.uwar </bash>

Output

The output of inference is a comma-separated list of "features." If the output is sent over Bluetooth (i.e. the -bt or -bt2 n option is set), then it contains descriptors for the features. The bt2 format has the following form:

((#DESC#descriptor(,descriptor)*|#DATA#value(,value)*)\r)*

Be sure to note that lines are terminated with a carriage return (ASCII 0xd).

So for example:

#DATA#12345,1,2,3
#DATA#12346,1,2,3
#DESC#TIME,steps_05s,stridelength_mean,stridelength_stddev
#DATA#12347,1,2,3
#DATA#12348,1,2,3

The number and order of the elements come directly from the inference XML description; they won't change any more often than the XML file does.

The time is in milliseconds since the epoch, but bear in mind that the IMote's clock will be wrong unless you set it; it's more likely to be simply uptime.
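As an illustration only, here is a minimal sketch of how such a stream could be parsed. It is written in Python, is not part of the msp code, and the function name is made up; the sample lines are taken from the example above.

<python>
# Minimal sketch of a bt2-stream parser (illustrative; not part of the msp code).
# The stream is a sequence of carriage-return-terminated lines: #DESC# lines
# name the fields, #DATA# lines carry comma-separated values.

def parse_bt2(stream_bytes):
    """Yield one dict per #DATA# line, keyed by the most recent #DESC# names."""
    names = None
    for raw in stream_bytes.split(b"\r"):
        line = raw.decode("ascii", errors="replace").strip()
        if line.startswith("#DESC#"):
            names = line[len("#DESC#"):].split(",")
        elif line.startswith("#DATA#"):
            values = line[len("#DATA#"):].split(",")
            if names is None:
                # No descriptor seen yet; fall back to positional keys.
                yield dict(enumerate(values))
            else:
                yield dict(zip(names, values))

sample = (b"#DESC#TIME,steps_05s,stridelength_mean,stridelength_stddev\r"
          b"#DATA#12347,1,2,3\r")
for record in parse_bt2(sample):
    print(record)
</python>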

Algorithm

The algorithm is simple in principle. It seeks to partition the input on (somewhat arbitrary) step boundaries and then find the maximum of the signal in each partition. At a high level, it does roughly this (a short sketch follows the list):

  1. Take the magnitude of the 3d accelerometer vector.
  2. Smooth the magnitude.
  3. Take the derivative of the smoothed magnitude.
  4. Look for peaks in the smoothed magnitude between ascending zero-crossings of the derivative.
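Below is a minimal sketch of these four steps in Python/NumPy. It is illustrative only: the function name, the window length, and the simple sliding-mean smoother (standing in for the Mill pipeline and the FFT-based smoother described later) are choices made for the sketch, not taken from the code.

<python>
import numpy as np

def detect_step_peaks(accel_xyz, win=15):
    """Sketch of the four steps. accel_xyz is an (N, 3) array of accelerometer
    samples; returns the indices of the detected step peaks."""
    # 1. Magnitude of the 3D acceleration vector.
    mag = np.linalg.norm(accel_xyz, axis=1)
    # 2. Smooth the magnitude (here: mean over a sliding window).
    smooth = np.convolve(mag, np.ones(win) / win, mode="same")
    # 3. Derivative of the smoothed magnitude.
    deriv = np.diff(smooth)
    # 4. Ascending zero-crossings of the derivative are the local minima of the
    #    smoothed magnitude (mid-step boundaries); the step peak is the maximum
    #    between consecutive boundaries.
    minima = np.where((deriv[:-1] < 0) & (deriv[1:] >= 0))[0] + 1
    return np.array([lo + np.argmax(smooth[lo:hi])
                     for lo, hi in zip(minima[:-1], minima[1:])], dtype=int)
</python>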

Step 1 is straightforward. Rather than looking at any component of the 3D acceleration, the algorithm considers only the magnitude of the vector. This means no attempt is made to determine the orientation of the device and there is no dependence on any particular orientation.

Step 2 is somewhat more complicated and has been done in two different ways. The problem is that the magnitude data are not smooth at the granularity we're interested in. There are high-frequency effects from effectively superimposing the three acceleration components. There are also other high-frequency, low-amplitude elements of the signal which are presumed not to be important to partitioning the data; they may be noise or simply some other uncorrelated motion. The goal, then, is to separate the periodic characteristics due to bipedal motion from those that are not.

The first approach taken to the smoothing problem came from a pair of assumptions: that the interesting signal was at about 1-4 Hz, and that most uninteresting features of the signal were low-amplitude. The smoother first put the signal through a simple low-pass filter: the mean over a sliding window. Then step 3 was run to find local extrema, and extrema whose absolute change from the previous one was small were discarded as low-amplitude noise rather than treated as step boundaries. This approach has been superseded by the one described next.

The second (and current) approach took the view that the interesting signal was at some low (but unknown) frequency, and that the uninteresting parts of the signal were the higher-frequency components. It addresses this directly with a filter with a frequency-based cutoff. The cutoff, however, is dynamic.

The FFT of the signal over a window is taken and the total energy (less the DC component) is determined. A threshold fraction of that energy is picked as likely containing most of the interesting parts of the signal. The smallest set of lowest-frequency bins that together contain at least that much energy is kept, the remaining bins are zeroed, and the inverse FFT is taken.
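A sketch of this per-window filter in Python/NumPy follows. The 90% energy fraction and the decision to keep the DC bin are assumptions made for the sketch, not values taken from the code.

<python>
import numpy as np

def dynamic_lowpass(window, energy_fraction=0.9):
    """Keep the fewest low-frequency bins holding energy_fraction of the
    non-DC energy, zero the rest, and take the inverse FFT."""
    spectrum = np.fft.rfft(window)
    energy = np.abs(spectrum) ** 2
    energy[0] = 0.0                    # total energy is computed less the DC value
    total = energy.sum()
    kept = np.zeros_like(spectrum)
    kept[0] = spectrum[0]              # keep DC so the mean level survives (assumption)
    running = 0.0
    for k in range(1, len(spectrum)):  # include the lowest-frequency bins first
        kept[k] = spectrum[k]
        running += energy[k]
        if total > 0 and running >= energy_fraction * total:
            break
    return np.fft.irfft(kept, n=len(window))
</python>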

This works well over the window, but the dynamic choice of the energy cut-off means that a different filter is applied to each window, which leaves the problem of smoothly stitching together the inverse FFTs. The solution applied is to slide the window by half its length and then combine the overlapping outputs with a weighted average that gives more weight toward the center of each window.
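A sketch of the stitching is below, reusing dynamic_lowpass() from the previous sketch. The triangular (Bartlett) weight and the 64-sample window are assumptions; the page only says the average gives more weight toward the center of the window.

<python>
import numpy as np

def smooth_signal(signal, win=64, energy_fraction=0.9):
    """Filter each half-overlapping window with dynamic_lowpass() (defined in
    the previous sketch) and blend the outputs with center-heavy weights."""
    hop = win // 2                              # slide the window by half its length
    weight = np.bartlett(win) + 1e-6            # center-weighted, kept nonzero
    out = np.zeros(len(signal))
    norm = np.zeros(len(signal))
    for start in range(0, len(signal) - win + 1, hop):
        filtered = dynamic_lowpass(signal[start:start + win], energy_fraction)
        out[start:start + win] += weight * filtered
        norm[start:start + win] += weight
    # Samples not covered by any full window are left at zero in this sketch.
    return out / np.maximum(norm, 1e-12)
</python>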

Step 3 is performed in order to locate local extrema in the smoothed signal: ascending zero-crossings of the derivative mark its minima, which are presumed to fall mid-step.

Finally, the peak is the maximum of the signal between two consecutive mid-step boundaries.

Mills

There are many Mills under the source directory. The algorithm uses the following Mills, in the order given:

  • MeanStdDevMill (outputs a smoothed magnitude)
  • LtiFilterMill (outputs the derivative of the smoothed magnitude)
  • ZeroCrossingMill (outputs the zero-crossings of the smoothed magnitude, found using its derivative)
  • AccelIntervalsMill (outputs the areas between zero-crossings)
  • WinThresholdMill (performs absolute and relative thresholding, given a window length)
  • WinMax (outputs the minimum and maximum for every window)

Source

CVS checkout / export

To check out the latest code: <bash>cvs -d user@bicycle.cs.washington.edu:/projects/ubicomp/uwar/CVS checkout msp_peak_detection</bash>

CVS tags

We're trying to keep our branch of msp reasonably in-sync with Intel Research Seattle's. That means we periodically merge in updates from the sourceforge repository maintained by Intel.

  • msp_peak_detection
  • msp_2008_02_05
  • pre-merge-1
  • merge-1
  • merge-2
  • pre-merge-3
  • merge-3

Building

msp_peak_detection is built the same way as other msp branches. <bash>
cd msp_peak_detection/src
./configure   # unless you're cross-compiling
make
cd inference
make
</bash> If you are cross-compiling, you'll first need to set the environment variable CROSS_COMPILE to the path to your cross-compiler and cross-binutils.

Old Matlab code