Classify posture from phone accelerometer data

Trained on the MotionSense dataset, this classifier uses smartphone accelerometer readings to determine whether a person was standing or sitting during a recording.

Try it with your data

1. Record: Use Sensor Logger or any accelerometer app to capture motion data while standing or sitting.

2. Upload: Export the CSV and drop it into the predict section below.

3. Classify: The model extracts 8 features and predicts standing or sitting instantly in your browser.

Dataset

MotionSense training data

The model was trained on the MotionSense dataset published by Malekzadeh et al. — accelerometer and gyroscope data collected from an iPhone 6s at 50 Hz, with 24 subjects performing various activities.

24 subjects · 96 CSV files loaded · 50 Hz sampling rate · 12 sensor columns

Data structure

Each CSV contains 12 columns of device motion data. We use the 3 userAcceleration columns (x, y, z) for classification.

Group             | Columns                                        | Used
Attitude          | attitude.roll, attitude.pitch, attitude.yaw    | No
Gravity           | gravity.x, gravity.y, gravity.z                | No
Rotation Rate     | rotationRate.x, rotationRate.y, rotationRate.z | No
User Acceleration | userAcceleration.x, .y, .z                     | Yes
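A minimal pandas sketch of this column selection (the file layout is an assumption based on the MotionSense release; the notebook's loading code may differ):

```python
import pandas as pd

# The three columns the model actually uses.
ACC_COLS = ["userAcceleration.x", "userAcceleration.y", "userAcceleration.z"]

def load_user_acceleration(path):
    """Read one MotionSense trial CSV and keep only the three
    user-acceleration columns; the other 9 sensor columns are dropped."""
    df = pd.read_csv(path)
    return df[ACC_COLS]
```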

Activity trials used

4 trials from 2 activities, 24 subjects each

Trial  | Activity | Subjects | ~Duration
std_6  | Standing | 24       | ~200 s
std_14 | Standing | 24       | ~50 s
sit_5  | Sitting  | 24       | ~200 s
sit_13 | Sitting  | 24       | ~200 s

Windowing & features

How raw data becomes model input

1. Segment each file into 30-second windows (1500 samples) with 50% overlap

2. Extract 8 statistical features per window:

  • RMS sway (magnitude intensity)
  • Std dev X, Y, Z (directional variability)
  • Mean jerk (movement smoothness)
  • Path length (total displacement)
  • Sway mean & peak (magnitude stats)

3. Classify with logistic regression
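Step 1 can be sketched with plain array slicing (a sketch under the stated parameters: 50 Hz, 30-second windows, 50% overlap; the notebook's exact implementation is assumed):

```python
import numpy as np

FS = 50          # sampling rate (Hz)
WIN = 30 * FS    # 1500 samples per 30-second window
STEP = WIN // 2  # 50% overlap -> hop of 750 samples

def segment(signal):
    """Split an (N, 3) acceleration array into overlapping windows.
    Trailing samples that cannot fill a whole window are discarded."""
    windows = [signal[start:start + WIN]
               for start in range(0, len(signal) - WIN + 1, STEP)]
    if not windows:
        return np.empty((0, WIN, signal.shape[1]))
    return np.stack(windows)
```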

812 total windows · 386 standing windows · 426 sitting windows · 8 features extracted

Subject demographics

24 subjects aged 18–46, varied height and weight, 14 male / 10 female

Performance

Model evaluation

Logistic regression evaluated with leave-one-subject-out cross-validation — the model never sees data from the test subject during training.

Leave-one-subject-out cross-validation accuracy: 91.5%
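In scikit-learn, this evaluation maps onto LeaveOneGroupOut with subject IDs as groups. A sketch (the feature standardization step and solver settings are assumptions; see the notebook for the actual training code):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def loso_accuracy(X, y, subjects):
    """Leave-one-subject-out CV: each fold trains on the other 23
    subjects and tests on the held-out one, so no data from the
    test subject ever reaches training."""
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    scores = cross_val_score(model, X, y, groups=subjects, cv=LeaveOneGroupOut())
    return scores.mean()
```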

Confusion matrix

Predicted vs actual across all 24 CV folds

                | Pred. Sitting | Pred. Standing
Actual Sitting  | 394           | 32
Actual Standing | 37            | 349

Classification metrics

Per-class precision, recall, and F1 score

Class    | Precision | Recall | F1
Sitting  | 91.4%     | 92.5%  | 92.0%
Standing | 91.6%     | 90.4%  | 91.0%
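These per-class metrics follow directly from the confusion matrix counts. A quick arithmetic check for the sitting class:

```python
# Confusion matrix counts pooled over all 24 CV folds (actual x predicted).
tp_sit, fn_sit = 394, 32   # actual sitting:  correct, misclassified as standing
fp_sit, tp_std = 37, 349   # actual standing: misclassified as sitting, correct

precision_sit = tp_sit / (tp_sit + fp_sit)  # 394 / 431
recall_sit = tp_sit / (tp_sit + fn_sit)     # 394 / 426
f1_sit = 2 * precision_sit * recall_sit / (precision_sit + recall_sit)
accuracy = (tp_sit + tp_std) / (tp_sit + fn_sit + fp_sit + tp_std)  # 743 / 812
```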

Per-subject classification accuracy

Each bar shows accuracy when that subject was held out as the test set

Feature importance

Absolute logistic regression coefficient — higher means more influential for the classification decision
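For a binary logistic regression there is a single coefficient vector, so ranking features by coefficient magnitude is a one-liner. A sketch (this assumes the inputs were standardized during training, so magnitudes are comparable across features):

```python
import numpy as np

def feature_importance(model, feature_names):
    """Rank features by the absolute value of the fitted logistic
    regression coefficients, most influential first."""
    weights = np.abs(model.coef_.ravel())
    order = np.argsort(weights)[::-1]
    return [(feature_names[i], float(weights[i])) for i in order]
```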

Pipeline

Training pipeline

The complete Python notebook used to load data, extract features, train the model, and export weights. Download and run it yourself, or read through below.

train_model.ipynb (Jupyter notebook; requires Python 3, scikit-learn, pandas, numpy, matplotlib, seaborn)

Predict

Test it yourself

Upload an accelerometer CSV from Sensor Logger or any recording app. The model will extract the same 8 features and predict what you were doing — standing or sitting.

Drop your accelerometer CSV here

Accepts CSV with columns: userAcceleration.x/y/z, or x/y/z, or accelerometerAccelerationX/Y/Z
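The header matching can be sketched as a lookup over the accepted column triples. This is a Python illustration of the logic (the page itself performs this step client-side in the browser, and the exact matching rules there are an assumption):

```python
# Accepted (x, y, z) column layouts, checked in order of preference.
ACCEPTED = [
    ("userAcceleration.x", "userAcceleration.y", "userAcceleration.z"),
    ("x", "y", "z"),
    ("accelerometerAccelerationX", "accelerometerAccelerationY",
     "accelerometerAccelerationZ"),
]

def find_axes(columns):
    """Return the first accepted (x, y, z) column triple present in the
    uploaded CSV header, or None if no supported layout is found."""
    cols = set(columns)
    for triple in ACCEPTED:
        if all(c in cols for c in triple):
            return triple
    return None
```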

Learn

Understanding posture recognition

How phone accelerometers and simple machine learning distinguish standing from sitting.

Why accelerometers?

Modern smartphones contain 3-axis accelerometers sampling at 50–100 Hz. Standing requires constant micro-adjustments for balance (postural sway), producing higher acceleration variance than the relative stillness of sitting.

MEMS Sensors — micro-electro-mechanical systems in your phone detect acceleration along three axes. Even when "still," the sensor captures subtle body movements.

Gravity Component — the MotionSense dataset separates gravity from user acceleration. We use only user acceleration, which isolates voluntary and involuntary body movements from the constant gravitational pull.

Sampling Rate — at 50 Hz, we get 1500 data points per 30-second window — more than enough to capture the frequency range of human postural sway (typically 0.1–2 Hz).

Feature engineering

Raw acceleration is noisy. We compute 8 statistical features per 30-second window: RMS sway (overall intensity), std dev per axis (directional patterns), mean jerk (movement smoothness), path length, and sway magnitude (mean and peak).

RMS Sway — root mean square of the acceleration magnitude vector. Captures overall movement intensity regardless of direction.

Standard Deviation (X, Y, Z) — measures variability along each axis independently. Standing tends to show higher variability, especially in the lateral (X) and anterior-posterior (Y) axes.

Mean Jerk — average rate of change of acceleration. Smoother movements produce lower jerk; the constant micro-corrections of standing produce higher jerk.

Path Length — total distance traveled in acceleration space, normalized by sample count. A proxy for cumulative movement effort.
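The feature definitions above can be sketched in a few lines of NumPy. The exact formulas here are a plausible reading of those definitions, not the notebook's verified implementation (the jerk scaling by sampling rate, in particular, is an assumption):

```python
import numpy as np

def extract_features(acc, fs=50):
    """Compute 8 statistical features from an (N, 3) window of user
    acceleration: RMS sway, per-axis std dev, mean jerk, path length,
    and sway magnitude mean and peak."""
    mag = np.linalg.norm(acc, axis=1)      # acceleration magnitude per sample
    step = np.linalg.norm(np.diff(acc, axis=0), axis=1)  # per-sample change
    return {
        "rms_sway": float(np.sqrt(np.mean(mag ** 2))),
        "std_x": float(acc[:, 0].std()),
        "std_y": float(acc[:, 1].std()),
        "std_z": float(acc[:, 2].std()),
        "mean_jerk": float(np.mean(step) * fs),        # change per second
        "path_length": float(step.sum() / len(acc)),   # normalized by samples
        "sway_mean": float(mag.mean()),
        "sway_peak": float(mag.max()),
    }
```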

The Romberg test

The Romberg test is a clinical exam where patients stand with eyes open then closed. Increased sway with eyes closed suggests impaired proprioception. Our posture classifier is a first step toward digitizing this assessment.

History — described by Moritz Heinrich Romberg (1795–1873) in his landmark neurology textbook. Originally used to diagnose tabes dorsalis, a complication of syphilis that destroys proprioceptive pathways.

Clinical Use — a positive Romberg sign (increased sway with eyes closed) indicates a proprioceptive or vestibular deficit. Crucially, cerebellar disorders cause ataxia whether the eyes are open or closed, so the test differentiates sensory problems from cerebellar ones.

Next Step — we are collecting eyes-open vs eyes-closed accelerometer data to train a second model that can detect impaired balance, bringing the digital Romberg test to life.

What is balance?

Balance relies on three sensory systems: vision, the vestibular apparatus (inner ear), and proprioception (body position sense). The brain integrates all three to maintain upright posture.

Loss of balance

Impaired balance can result from neurological conditions, inner ear disorders, fatigue, or injury. Even closing your eyes removes visual feedback and increases postural sway in healthy individuals.

MotionSense dataset

Published by Malekzadeh et al., this dataset collected iPhone 6s motion data from 24 diverse subjects (ages 18–46) across 6 activities. We use the standing and sitting trials for binary classification.

About

What is Romberger?

Romberger is an educational tool for exploring posture classification using smartphone accelerometer data. A logistic regression model was trained on the MotionSense dataset in Python (scikit-learn), and its weights are embedded directly in this page. Upload your own sensor recording and get a prediction instantly — everything runs client-side, no server needed. This is not intended for clinical use.