Trained on the MotionSense dataset, this classifier uses smartphone accelerometer readings to determine whether a person was standing or sitting during a recording.
Use Sensor Logger or any accelerometer app to capture motion data while standing or sitting.
Export the CSV and drop it into the predict section below.
The model extracts 8 features and predicts standing or sitting instantly in your browser.
Dataset
The model was trained on the MotionSense dataset published by Malekzadeh et al. — accelerometer and gyroscope data collected from an iPhone 6s at 50 Hz, with 24 subjects performing various activities.
Each CSV contains 12 columns of device motion data. We use the 3 userAcceleration columns (x, y, z) for classification.
| Group | Columns | Used |
|---|---|---|
| Attitude | attitude.roll, attitude.pitch, attitude.yaw | No |
| Gravity | gravity.x, gravity.y, gravity.z | No |
| Rotation Rate | rotationRate.x, rotationRate.y, rotationRate.z | No |
| User Acceleration | userAcceleration.x, userAcceleration.y, userAcceleration.z | Yes |
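Selecting the three user-acceleration columns is a one-liner in pandas. A minimal sketch (the 12-column DataFrame here is synthetic; real files live in per-trial folders of the MotionSense release):

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for one MotionSense device-motion CSV (12 columns).
rng = np.random.default_rng(0)
cols = ["attitude.roll", "attitude.pitch", "attitude.yaw",
        "gravity.x", "gravity.y", "gravity.z",
        "rotationRate.x", "rotationRate.y", "rotationRate.z",
        "userAcceleration.x", "userAcceleration.y", "userAcceleration.z"]
df = pd.DataFrame(rng.normal(size=(1500, 12)), columns=cols)

# Keep only the three user-acceleration columns used for classification.
acc = df[["userAcceleration.x", "userAcceleration.y",
          "userAcceleration.z"]].to_numpy()
print(acc.shape)  # (1500, 3)
```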
4 trials from 2 activities, 24 subjects each
| Trial | Activity | Subjects | ~Duration |
|---|---|---|---|
| std_6 | Standing | 24 | ~200s |
| std_14 | Standing | 24 | ~50s |
| sit_5 | Sitting | 24 | ~200s |
| sit_13 | Sitting | 24 | ~200s |
How raw data becomes model input
1. Segment each file into 30-second windows (1500 samples) with 50% overlap
2. Extract 8 statistical features per window (RMS sway, per-axis standard deviation, mean jerk, path length, and mean and peak sway magnitude)
3. Classify with logistic regression
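The windowing in step 1 can be sketched as follows. At 50 Hz a 30-second window is 1500 samples, and 50% overlap means a 750-sample hop (the helper name is illustrative):

```python
import numpy as np

def windows(acc, size=1500, overlap=0.5):
    """Slice an (n, 3) acceleration array into overlapping windows.

    size=1500 is 30 s at 50 Hz; overlap=0.5 gives a 750-sample hop.
    """
    hop = int(size * (1 - overlap))
    return [acc[i:i + size] for i in range(0, len(acc) - size + 1, hop)]

acc = np.zeros((6000, 3))   # ~120 s of dummy samples
ws = windows(acc)
print(len(ws))              # 7 windows, starting at 0, 750, ..., 4500
```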
24 subjects aged 18–46, varied height and weight, 14 male / 10 female
Performance
Logistic regression evaluated with leave-one-subject-out cross-validation — the model never sees data from the test subject during training.
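Leave-one-subject-out cross-validation maps directly onto scikit-learn's `LeaveOneGroupOut`, with the subject ID as the group label. A sketch on synthetic features (the data here is random; only the fold structure is the point):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

# Synthetic stand-in: 8 features per window, one group id per subject.
rng = np.random.default_rng(1)
X = rng.normal(size=(240, 8))
y = rng.integers(0, 2, size=240)          # 0 = sitting, 1 = standing
subjects = np.repeat(np.arange(24), 10)   # 10 windows per subject

# Each fold holds out every window from exactly one subject.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         cv=LeaveOneGroupOut(), groups=subjects)
print(len(scores))  # 24 folds, one per held-out subject
```

Grouping by subject matters: overlapping windows from the same person are highly correlated, so a plain random split would leak the test subject's data into training and inflate accuracy.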
Predicted vs actual across all 24 CV folds
Per-class precision, recall, and F1 score
| Class | Precision | Recall | F1 |
|---|---|---|---|
| Sitting | 91.4% | 92.5% | 92.0% |
| Standing | 91.6% | 90.4% | 91.0% |
Each bar shows accuracy when that subject was held out as the test set
Absolute logistic regression coefficient — higher means more influential for the classification decision
Pipeline
The complete Python notebook used to load data, extract features, train the model, and export weights. Download and run it yourself, or read through below.
Jupyter notebook — requires Python 3, scikit-learn, pandas, numpy, matplotlib, seaborn
Predict
Upload an accelerometer CSV from Sensor Logger or any recording app. The model will extract the same 8 features and predict what you were doing — standing or sitting.
Accepts CSV with columns: userAcceleration.x/y/z, or x/y/z, or accelerometerAccelerationX/Y/Z
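Accepting several header conventions just means probing for each known triple in turn. A sketch of that normalization (the helper name and fallback behavior are assumptions; only the three listed variants come from the page):

```python
import pandas as pd

# The accepted header variants, mapped onto a canonical (x, y, z) triple.
VARIANTS = [
    ("userAcceleration.x", "userAcceleration.y", "userAcceleration.z"),
    ("x", "y", "z"),
    ("accelerometerAccelerationX", "accelerometerAccelerationY",
     "accelerometerAccelerationZ"),
]

def acceleration_columns(df):
    """Return an (n, 3) array from whichever variant the CSV uses."""
    for triple in VARIANTS:
        if all(c in df.columns for c in triple):
            return df[list(triple)].to_numpy()
    raise ValueError("no recognised accelerometer columns")

df = pd.DataFrame({"x": [0.0, 0.1], "y": [0.0, 0.0], "z": [1.0, 0.9]})
print(acceleration_columns(df).shape)  # (2, 3)
```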
Learn
How phone accelerometers and simple machine learning distinguish standing from sitting.
Modern smartphones contain 3-axis accelerometers sampling at 50–100 Hz. Standing requires constant micro-adjustments for balance (postural sway), producing higher acceleration variance than the relative stillness of sitting.
MEMS Sensors — micro-electro-mechanical systems in your phone detect acceleration along three axes. Even when "still," the sensor captures subtle body movements.
Gravity Component — the MotionSense dataset separates gravity from user acceleration. We use only user acceleration, which isolates voluntary and involuntary body movements from the constant gravitational pull.
Sampling Rate — at 50 Hz, we get 1500 data points per 30-second window — more than enough to capture the frequency range of human postural sway (typically 0.1–2 Hz).
Raw acceleration is noisy. We compute 8 statistical features per 30-second window: RMS sway (overall intensity), std dev per axis (directional patterns), mean jerk (movement smoothness), path length, and sway magnitude (mean and peak).
RMS Sway — root mean square of the acceleration magnitude vector. Captures overall movement intensity regardless of direction.
Standard Deviation (X, Y, Z) — measures variability along each axis independently. Standing tends to show higher variability, especially in the lateral (X) and anterior-posterior (Y) axes.
Mean Jerk — average rate of change of acceleration. Smoother movements produce lower jerk; the constant micro-corrections of standing produce higher jerk.
Path Length — total distance traveled in acceleration space, normalized by sample count. A proxy for cumulative movement effort.
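The feature descriptions above can be written down directly. A sketch of the 8-feature extractor, assuming these straightforward definitions (the notebook's exact formulas may differ slightly):

```python
import numpy as np

def extract_features(win, fs=50):
    """Compute 8 features from one (n, 3) user-acceleration window."""
    mag = np.linalg.norm(win, axis=1)        # acceleration magnitude
    jerk = np.diff(win, axis=0) * fs         # rate of change of acceleration
    step = np.linalg.norm(np.diff(win, axis=0), axis=1)
    return np.array([
        np.sqrt(np.mean(mag ** 2)),          # RMS sway
        win[:, 0].std(),                     # std dev, X axis
        win[:, 1].std(),                     # std dev, Y axis
        win[:, 2].std(),                     # std dev, Z axis
        np.linalg.norm(jerk, axis=1).mean(), # mean jerk
        step.sum() / len(win),               # path length per sample
        mag.mean(),                          # mean sway magnitude
        mag.max(),                           # peak sway magnitude
    ])

win = np.random.default_rng(2).normal(scale=0.02, size=(1500, 3))
print(extract_features(win).shape)  # (8,)
```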
The Romberg test is a clinical exam where patients stand with eyes open then closed. Increased sway with eyes closed suggests impaired proprioception. Our posture classifier is a first step toward digitizing this assessment.
History — described by Moritz Heinrich Romberg (1795–1873) in his landmark neurology textbook. Originally used to diagnose tabes dorsalis, a complication of syphilis that destroys proprioceptive pathways.
Clinical Use — a positive Romberg sign (increased sway with eyes closed) indicates a proprioceptive or vestibular deficit. Crucially, cerebellar disorders cause ataxia with eyes open and closed alike, so the test differentiates sensory from cerebellar problems.
Next Step — we are collecting eyes-open vs eyes-closed accelerometer data to train a second model that can detect impaired balance, bringing the digital Romberg test to life.
Balance relies on three sensory systems: vision, the vestibular apparatus (inner ear), and proprioception (body position sense). The brain integrates all three to maintain upright posture.
Impaired balance can result from neurological conditions, inner ear disorders, fatigue, or injury. Even closing your eyes removes visual feedback and increases postural sway in healthy individuals.
Published by Malekzadeh et al., this dataset collected iPhone 6s motion data from 24 diverse subjects (ages 18–46) across 6 activities. We use the standing and sitting trials for binary classification.
About
Romberger is an educational tool for exploring posture classification using smartphone accelerometer data. A logistic regression model was trained on the MotionSense dataset in Python (scikit-learn), and its weights are embedded directly in this page. Upload your own sensor recording and get a prediction instantly — everything runs client-side, no server needed. This is not intended for clinical use.
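Running the model client-side is simple because a trained logistic regression is just a dot product and a sigmoid. A sketch of applying exported weights to one feature vector (the weight and bias values below are placeholders, not the real exported model):

```python
import numpy as np

def predict_posture(features, weights, bias):
    """Apply exported logistic-regression weights to one 8-feature vector."""
    z = float(np.dot(weights, features) + bias)
    p = 1.0 / (1.0 + np.exp(-z))   # sigmoid -> P(standing), by convention here
    return ("standing" if p >= 0.5 else "sitting"), p

# Placeholder weights: zero coefficients, bias 1.0 -> sigmoid(1.0) ~ 0.73
label, p = predict_posture(np.zeros(8), np.zeros(8), 1.0)
print(label)  # standing
```

The same arithmetic ports to a few lines of JavaScript, which is why no server is needed.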