Failed

Changes

Summary

  1. Add a classification step to the pipeline (commit: ce6cec989aa54f62474dcb225ac4dd2077ec13c8) (details)
  2. Switch everything to use inferred_section instead of cleaned_section (commit: 232628ce1372e691ae1b9b15e2cbb061aaf3ad33) (details)
  3. habitica tests use the metrics which now use inferred sections (commit: 2cd832889f3ddee09f387914d844945fa8950d30) (details)
  4. Actually fix the geojson plotter to return the correct values (commit: a4f623e7e15e53c4366e8f7c3bb70f039dd9cbdc) (details)
  5. Update habitica test to use predicted modes instead of sensed modes (commit: 2666a68ff25376ee76f8179f8e47796ad01d03bc) (details)
  6. Get the metrics tests to pass with inferred data as well (commit: fbb096e0a0dd3d682242cc313b17e64e31107c49) (details)
  7. Run the mode inference before converting geojson (commit: 3fac2cfcc71c6c35ff3f193dfdd8c17dddc433e9) (details)
  8. Switch another test to predicted mode (commit: 5aec7f9cc446aac9917c8cebd37906199e32399d) (details)
  9. Remove deprecated API from pymongo (commit: 284861c3a0921da017c3a4bfbd2bb59b104ff4bb) (details)
  10. Copy over the simple 2 result vector seed model before running the tests (commit: b390ee00a58bcd02acad891b11f4a476cec3c517) (details)
  11. Add support for deleting mode inference results (commit: 97c3ac0938bc32fef70d3b2c8e371bef15f9379a) (details)
  12. Actually add the inferred sections test (commit: 942d0c4bb493cb552d898c506bc77fe23b4fbab0) (details)
  13. Add a simple test that just copies files over (commit: b4b578e9efe8259641963fe15c51ac357a7afad9) (details)
  14. Add another real test (commit: 2d86dd2a71a4c28223a431693992cb4f939c57b1) (details)
  15. Remove validation from the setup code (commit: 68c54540a3947513418a11207f782bbd134869ff) (details)
  16. Switching back to full test suite (commit: d8e0c2c1f60b5c172961df81373d27be46deab81) (details)
  17. Fix the last tests! (commit: 52e7018dc65d5e679ec776698c5c6ca321543e2b) (details)
  18. Go back to individual tests (commit: 139eab7f0f0a15ec293676d2a7c969cc4afd3c86) (details)
  19. Remove tests that use real data from the individual inferred test (commit: e456e4799f5cee13ce1b5f93a9cc91b28059db16) (details)
  20. Restore the tests that were failing in the full list (commit: 1365601352e6d10da57656dba6138b005f1a58c5) (details)
  21. Restore running all tests (commit: 4fb702479c49b9002b5830b887dbdd14beca82b9) (details)
  22. Let's see if this is the only failing test (commit: 4474039c97e9d8efcaf05925e1a981a2d150e0a5) (details)
  23. Switch to running the individual test (commit: a400bfcd6fe76a1544395e075bb9c635aa5a98fa) (details)
  24. Added the test back and switched to running all tests (commit: b5f441a5b9bde8501d8734accbe6f8debe356e22) (details)
  25. Remove the test again because I have no idea why it is failing (commit: a27bfa4f8153c6ed5309a6fc07f1defd53b7680d) (details)
  26. Let's try to run the individual tests separately (commit: 8b3541e50975bfbaed35b2be4915781944ae8ae1) (details)
Commit ce6cec989aa54f62474dcb225ac4dd2077ec13c8 by shankari
Add a classification step to the pipeline
addressed simple issues - no further failures
```
2018-02-22T16:12:35.936949-08:00**********UUID 495a7860-36d2-4c56-a2bb-749414434e6c: cleaning and resampling timeline**********
2018-02-22T16:12:37.302180-08:00**********UUID 495a7860-36d2-4c56-a2bb-749414434e6c: inferring transportation mode**********
2018-02-22T16:12:37.430458-08:00**********UUID 495a7860-36d2-4c56-a2bb-749414434e6c: checking active mode trips to autocheck habits**********
```
Inference is generated. But it is always 'BUS' versus 'WALKING'. That
looks suspicious...
```
[{'_id': ObjectId('5a8f5c75f6858ffa4994910e'),
'data': {'predicted_mode_map': {'BUS': 0.2, 'WALKING': 0.8}}},
{'_id': ObjectId('5a8f5c75f6858ffa49949110'),
'data': {'predicted_mode_map': {'BUS': 0.2, 'WALKING': 0.8}}},
{'_id': ObjectId('5a8f5c75f6858ffa49949112'),
'data': {'predicted_mode_map': {'BUS': 0.2, 'WALKING': 0.8}}},
{'_id': ObjectId('5a8f5c75f6858ffa49949114'),
'data': {'predicted_mode_map': {'BUS': 0.2, 'WALKING': 0.8}}},
{'_id': ObjectId('5a8f5c75f6858ffa49949116'),
'data': {'predicted_mode_map': {'BUS': 0.2, 'WALKING': 0.8}}},
{'_id': ObjectId('5a8f5c75f6858ffa49949118'),
'data': {'predicted_mode_map': {'BUS': 0.2, 'WALKING': 0.8}}}]
```
Ah! The model was the simplified model used for testing. Using one of
the generated models, we get
```
[{'_id': ObjectId('5a8f5e80f6858ffad3611ea2'),
'data': {'predicted_mode_map': {'AIR_OR_HSR': 0.1,
   'BICYCLING': 0.5,
   'WALKING': 0.4}}},
{'_id': ObjectId('5a8f5e80f6858ffad3611ea4'),
'data': {'predicted_mode_map': {'WALKING': 1.0}}},
{'_id': ObjectId('5a8f5e80f6858ffad3611ea6'),
'data': {'predicted_mode_map': {'BICYCLING': 0.1,
   'CAR': 0.1,
   'WALKING': 0.8}}},
{'_id': ObjectId('5a8f5e80f6858ffad3611ea8'),
'data': {'predicted_mode_map': {'BICYCLING': 0.9, 'CAR': 0.1}}},
{'_id': ObjectId('5a8f5e80f6858ffad3611eaa'),
'data': {'predicted_mode_map': {'WALKING': 1.0}}},
{'_id': ObjectId('5a8f5e80f6858ffad3611eac'),
'data': {'predicted_mode_map': {'BICYCLING': 0.1, 'WALKING': 0.9}}}]
```
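Each entry's `predicted_mode_map` is a probability distribution over modes. As a minimal sketch (the helper name is hypothetical, not part of the e-mission pipeline), the predicted mode can be read off as the argmax of that map:

```python
# Hypothetical helper (not from the e-mission codebase): pick the most
# likely mode out of a predicted_mode_map probability distribution like
# the ones in the pipeline output above.
def most_likely_mode(predicted_mode_map):
    # argmax over (mode, probability) pairs
    return max(predicted_mode_map.items(), key=lambda kv: kv[1])[0]

print(most_likely_mode({'AIR_OR_HSR': 0.1, 'BICYCLING': 0.5, 'WALKING': 0.4}))
# prints BICYCLING
```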
(commit: ce6cec989aa54f62474dcb225ac4dd2077ec13c8)
The file was modified emission/storage/pipeline_queries.py (diff)
The file was modified emission/pipeline/intake_stage.py (diff)
The file was modified emission/analysis/classification/inference/mode/pipeline.py (diff)
Commit 232628ce1372e691ae1b9b15e2cbb061aaf3ad33 by shankari
Switch everything to use inferred_section instead of cleaned_section
Fairly simple set of fixes
- covers both the geojson stuff + the metrics
- configurable via config.json so we can test both old and new
Updated the existing tests to use the cleaned sections; still need to add a new
test for the new inferred sections
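The config.json-driven switch could look something like the sketch below; the `section_type` key and the helper are illustrative assumptions, not the actual `emission/analysis/config.py` implementation:

```python
# Illustrative sketch of a config.json-driven switch between the old
# cleaned sections and the new inferred sections. The "section_type"
# key and this helper are assumptions, not the real config.py code.
DEFAULT_CONFIG = {"section_type": "inferred_section"}

def get_section_key(config=None):
    config = config or DEFAULT_CONFIG
    # e.g. "analysis/inferred_section" (new) or "analysis/cleaned_section" (old)
    return "analysis/" + config.get("section_type", "cleaned_section")
```

Downstream callers (the geojson conversion, the metrics) would then query the timeseries with this key instead of hard-coding the cleaned one, which is what lets the tests cover both old and new behavior.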
(commit: 232628ce1372e691ae1b9b15e2cbb061aaf3ad33)
The file was modified emission/tests/analysisTests/intakeTests/TestPipelineRealData.py (diff)
The file was modified emission/storage/decorations/analysis_timeseries_queries.py (diff)
The file was modified emission/analysis/result/metrics/time_grouping.py (diff)
The file was modified emission/analysis/plotting/geojson/geojson_feature_converter.py (diff)
The file was modified emission/analysis/config.py (diff)
The file was modified emission/tests/analysisTests/resultTests/TestTimeGrouping.py (diff)
The file was modified conf/analysis/debug.conf.json.sample (diff)
Commit 2cd832889f3ddee09f387914d844945fa8950d30 by shankari
habitica tests use the metrics which now use inferred sections
So we need to create inferred sections for testing
(commit: 2cd832889f3ddee09f387914d844945fa8950d30)
The file was modified emission/tests/netTests/TestHabiticaAutocheck.py (diff)
Commit a4f623e7e15e53c4366e8f7c3bb70f039dd9cbdc by shankari
Actually fix the geojson plotter to return the correct values
The first set of changes (in 232628ce1372e691ae1b9b15e2cbb061aaf3ad33)
switched cleaned -> inferred. But that was actually in code where we get
the locations for the section.
The actual sections are retrieved as part of the cleaned timeline.
So the potential fixes are:
- create a new inferred timeline that combines inferred sections with
cleaned stops. This is not strictly hard to do (see
`emission.storage.decorations.timeline.get_timeline`), but then we need
to think about whether we should also have the stops be inferred stops...
- just replace the mode (which is the only thing different between the
sections) with the mode from the corresponding inferred section. This
is easy and it works.
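The second option can be sketched as follows; all names here are illustrative assumptions, not the actual `geojson_feature_converter` code:

```python
# Illustrative sketch of option 2: keep the sections from the cleaned
# timeline, but overwrite each one's mode with the predicted mode from
# the corresponding inferred section. Field names are assumptions.
def overlay_inferred_modes(cleaned_sections, inferred_by_cleaned_id):
    for section in cleaned_sections:
        inferred = inferred_by_cleaned_id.get(section["_id"])
        if inferred is not None:
            # the mode is the only field that differs between the two
            section["sensed_mode"] = inferred["predicted_mode"]
    return cleaned_sections
```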
(commit: a4f623e7e15e53c4366e8f7c3bb70f039dd9cbdc)
The file was modified emission/analysis/plotting/geojson/geojson_feature_converter.py (diff)
Commit 2666a68ff25376ee76f8179f8e47796ad01d03bc by shankari
Update habitica test to use predicted modes instead of sensed modes
Since we are inserting test sections that are inferred sections, make
their modes predicted modes, not sensed modes
(commit: 2666a68ff25376ee76f8179f8e47796ad01d03bc)
The file was modified emission/tests/netTests/TestHabiticaAutocheck.py (diff)
Commit fbb096e0a0dd3d682242cc313b17e64e31107c49 by shankari
Get the metrics tests to pass with inferred data as well
- First, add the inference to the intake pipeline in the tests common file
- Next, copy the tests on cleaned data to a separate test file
- Finally, create a new test for metrics on inferred modes and update
the expected values for the inferred modes by looking at the current
results
(commit: fbb096e0a0dd3d682242cc313b17e64e31107c49)
The file was modified emission/analysis/result/metrics/time_grouping.py (diff)
The file was modified emission/core/wrapper/modestattimesummary.py (diff)
The file was added emission/tests/netTests/TestMetricsCleanedSections.py
The file was modified emission/tests/common.py (diff)
The file was removed emission/tests/netTests/TestMetrics.py
The file was modified emission/tests/analysisTests/plottingTests/TestGeojsonFeatureConverter.py (diff)
Commit 5aec7f9cc446aac9917c8cebd37906199e32399d by shankari
Switch another test to predicted mode
Similar to 2666a68ff25376ee76f8179f8e47796ad01d03bc for habitica
(commit: 5aec7f9cc446aac9917c8cebd37906199e32399d)
The file was modified emission/tests/analysisTests/resultTests/TestTimeGrouping.py (diff)
The file was modified emission/tests/analysisTests/plottingTests/TestGeojsonFeatureConverter.py (diff)
The file was modified emission/tests/analysisTests/modeinferTests/TestPipeline.py (diff)
Commit b390ee00a58bcd02acad891b11f4a476cec3c517 by shankari
Copy over the simple 2 result vector seed model before running the tests
Using a standard, albeit fake, model will:
- give predictable results
- allow full system testing
Without a model, the inference step, and by extension some tests, would
fail
(commit: b390ee00a58bcd02acad891b11f4a476cec3c517)
The file was modified emission/tests/common.py (diff)
The file was modified emission/tests/analysisTests/plottingTests/TestGeojsonFeatureConverter.py (diff)
Commit 97c3ac0938bc32fef70d3b2c8e371bef15f9379a by shankari
Add support for deleting mode inference results
It may be worthwhile for each pipeline step to know how to clean up
after itself; let's start with this one and see how it is used
Also:
- log any exceptions encountered during the analysis pipeline run
- Fix assert so that it prints meaningful error message
- remove pipeline results before testing the pipeline
- also copy the seed model over as part of testing
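A cleanup step along these lines could be sketched as below; the metadata key is an assumption about how the results are stored, `delete_many` is the standard pymongo collection API, and the in-memory collection exists only so the sketch runs without MongoDB:

```python
# Hedged sketch of the cleanup step: delete prior mode-inference
# results for a user before re-running the pipeline stage. The
# "inference/prediction" metadata key is an assumption.
def cleanup_mode_inference(analysis_collection, user_id):
    result = analysis_collection.delete_many({
        "user_id": user_id,
        "metadata.key": "inference/prediction",
    })
    return result.deleted_count


class _FakeResult:
    def __init__(self, n):
        self.deleted_count = n


class FakeCollection:
    """Tiny in-memory stand-in for a pymongo collection, so the sketch
    runs without a MongoDB instance. Only exact-match delete_many on
    flat and dotted keys is supported."""
    def __init__(self, docs):
        self.docs = list(docs)

    def delete_many(self, query):
        def matches(doc):
            for key, val in query.items():
                cur = doc
                for part in key.split("."):
                    cur = cur.get(part) if isinstance(cur, dict) else None
                if cur != val:
                    return False
            return True
        kept = [d for d in self.docs if not matches(d)]
        removed = len(self.docs) - len(kept)
        self.docs = kept
        return _FakeResult(removed)
```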
(commit: 97c3ac0938bc32fef70d3b2c8e371bef15f9379a)
The file was modified emission/storage/decorations/section_queries.py (diff)
The file was modified emission/analysis/classification/inference/mode/pipeline.py (diff)
The file was modified emission/tests/analysisTests/modeinferTests/TestPipeline.py (diff)
Commit 942d0c4bb493cb552d898c506bc77fe23b4fbab0 by shankari
Actually add the inferred sections test
From fbb096e0a0dd3d682242cc313b17e64e31107c49
(commit: 942d0c4bb493cb552d898c506bc77fe23b4fbab0)
The file was added emission/tests/netTests/TestMetricsInferredSections.py
Commit b4b578e9efe8259641963fe15c51ac357a7afad9 by shankari
Add a simple test that just copies files over
Tests are failing on Jenkins although they pass locally because the seed
model is not found. But I'm copying the seed model!
Let's add a very simple test that basically just copies over the seed
model so that we can see the failure (if any) in the console.
Also switch the default test run over to this simple test
(commit: b4b578e9efe8259641963fe15c51ac357a7afad9)
The file was modified emission/tests/common.py (diff)
The file was modified runAllTests.sh (diff)
The file was added emission/individual_tests/__init__.py
The file was added emission/individual_tests/TestPipeline.py
Commit 2d86dd2a71a4c28223a431693992cb4f939c57b1 by shankari
Add another real test
+ move imports down to see if we can get some more logging
(commit: 2d86dd2a71a4c28223a431693992cb4f939c57b1)
The file was modified emission/individual_tests/TestPipeline.py (diff)
Commit 68c54540a3947513418a11207f782bbd134869ff by shankari
Remove validation from the setup code
So that passed.
https://amplab.cs.berkeley.edu/jenkins/job/e-mission-server-prb/1029/console
https://amplab.cs.berkeley.edu/jenkins/job/e-mission-server-prb/1030/console
Let's remove the validation and see if it still works
(commit: 68c54540a3947513418a11207f782bbd134869ff)
The file was modified emission/tests/common.py (diff)
Commit d8e0c2c1f60b5c172961df81373d27be46deab81 by shankari
Switching back to full test suite
Passed again, even with validation removed.
https://amplab.cs.berkeley.edu/jenkins//job/e-mission-server-prb/1031/
Switching back to full test suite.
(commit: d8e0c2c1f60b5c172961df81373d27be46deab81)
The file was modified runAllTests.sh (diff)
Commit 52e7018dc65d5e679ec776698c5c6ca321543e2b by shankari
Fix the last tests!
- TestGeojsonFeatureConverter -> don't delete the file twice!
- Inferred sections -> needed to copy the temp model and then twiddle
the results to match those from the new model
(commit: 52e7018dc65d5e679ec776698c5c6ca321543e2b)
The file was modified emission/tests/netTests/TestMetricsInferredSections.py (diff)
The file was modified emission/tests/analysisTests/plottingTests/TestGeojsonFeatureConverter.py (diff)
Commit 139eab7f0f0a15ec293676d2a7c969cc4afd3c86 by shankari
Go back to individual tests
This time, the inferred metrics, to see why they are not working
properly
(commit: 139eab7f0f0a15ec293676d2a7c969cc4afd3c86)
The file was modified runAllTests.sh (diff)
The file was removed emission/individual_tests/TestPipeline.py
Commit e456e4799f5cee13ce1b5f93a9cc91b28059db16 by shankari
Remove tests that use real data from the individual inferred test
(commit: e456e4799f5cee13ce1b5f93a9cc91b28059db16)
The file was added emission/individual_tests/TestMetricsInferredSections.py
Commit 1365601352e6d10da57656dba6138b005f1a58c5 by shankari
Restore the tests that were failing in the full list
(commit: 1365601352e6d10da57656dba6138b005f1a58c5)
The file was modified emission/individual_tests/TestMetricsInferredSections.py (diff)
Commit 4fb702479c49b9002b5830b887dbdd14beca82b9 by shankari
Restore running all tests
Now that running individual tests passed
https://amplab.cs.berkeley.edu/jenkins//job/e-mission-server-prb/1036/
(commit: 4fb702479c49b9002b5830b887dbdd14beca82b9)
The file was modified runAllTests.sh (diff)
Commit 4474039c97e9d8efcaf05925e1a981a2d150e0a5 by shankari
Let's see if this is the only failing test
I have no idea how it passes individually but fails in a group. But it
doesn't appear to be something that is broken in the underlying code,
just some artifact of the automatic testing
(commit: 4474039c97e9d8efcaf05925e1a981a2d150e0a5)
The file was removed emission/tests/netTests/TestMetricsInferredSections.py
The file was modified runAllTests.sh (diff)
Commit b5f441a5b9bde8501d8734accbe6f8debe356e22 by shankari
Added the test back and switched to running all tests
(commit: b5f441a5b9bde8501d8734accbe6f8debe356e22)
The file was added emission/tests/netTests/TestMetricsInferredSections.py
The file was modified runAllTests.sh (diff)
Commit a27bfa4f8153c6ed5309a6fc07f1defd53b7680d by shankari
Remove the test again because I have no idea why it is failing
(commit: a27bfa4f8153c6ed5309a6fc07f1defd53b7680d)
The file was removed emission/tests/netTests/TestMetricsInferredSections.py
Commit 8b3541e50975bfbaed35b2be4915781944ae8ae1 by shankari
Let's try to run the individual tests separately
But as part of the same automatic script. This should work, will ensure
that we are actually testing the new code, and will give us another data
point on this heisenbug
(commit: 8b3541e50975bfbaed35b2be4915781944ae8ae1)
The file was added runIndividualTests.sh