Home
This is the wiki for the CACHET Research Platform (CARP) Mobile Sensing (CAMS) software. An overview of and access to the software libraries are available on the main GitHub site.
NOTE - This documentation is aligned with the CARP Mobile Sensing version 0.40.x libraries.
The CARP Mobile Sensing (CAMS) Flutter package carp_mobile_sensing
is a programming framework for adding digital phenotyping capabilities to your mobile (health) app.
CARP Mobile Sensing is designed to collect three main types of data, as illustrated in the sketch after this list:
- passive sensing from onboard sensors on the phone (e.g., light, location, etc.)
- collection of data from (wearable) devices and online services connected to the phone (e.g., heart rate, weather information, etc.)
- active collection of data from the user (e.g., surveys, EMAs, questionnaires, etc.)
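Each of these data types ends up as a measure or task in a study protocol. The following is a rough sketch only: the sensor and device measure types are constants from the built-in sampling packages, whereas the heart rate type string is merely a placeholder for a type that would be provided by an external sampling package.
import 'package:carp_core/carp_core.dart';
import 'package:carp_mobile_sensing/carp_mobile_sensing.dart';

/// A sketch of how the three kinds of data map to measures in a task.
BackgroundTask exampleTask() => BackgroundTask(name: 'Example Task')
  ..addMeasures([
    // 1. Passive sensing from onboard phone sensors.
    Measure(type: SensorSamplingPackage.LIGHT),
    Measure(type: SensorSamplingPackage.PEDOMETER),
    // 2. Data from the phone itself, connected devices, or online services.
    Measure(type: DeviceSamplingPackage.BATTERY),
    // A wearable device measure would use a type defined by an external
    // sampling package -- this type string is only a placeholder.
    Measure(type: 'dk.cachet.carp.heartrate'),
  ]);

// 3. Active data collection from the user (surveys, EMAs) is configured
//    using an AppTask rather than a BackgroundTask -- see 'The AppTask Model'.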
These wiki pages contain overall documentation of the main software architecture and domain model of the framework, and how to use and extend it, plus a set of appendices providing details on measure types, data formats, and data backends.
- Software Architecture – the overall picture.
- Domain Model – the detailed picture of the data model(s).
- Using CARP Mobile Sensing – how to configure passive sensing in your app.
- The AppTask Model – how to collect data from the user in your app.
- Extending CARP Mobile Sensing – how to extend the framework, with a focus on new sensing capabilities and external (wearable) devices.
- Best Practice – tips and tricks, especially related to differences between Android and iOS.
Appendices
The overall goal of CAMS is to provide a programming framework that helps build custom mobile sensing applications.
The basic scenario we want to support is to allow programmers to design and implement a custom mHealth app, e.g., for cardiovascular diseases or diabetes. Such an app would have its main focus on providing application-specific functionality for patients with either heart rhythm problems or diabetes, which are two rather distinct application domains.
However, the goal of a mobile sensing programming framework would be to enable programmers to add mobile sensing capabilities in a 'flexible and simple' manner. This includes adding support for collecting data like ECG, location, activity, and step count; formatting this data according to different health data formats (like the Open mHealth formats); using this data in the app (e.g., showing it to the user); and uploading it to a specific server, using a specific API (e.g., REST), in a specific format. The framework should also be able to support different wearable devices for ECG or glucose monitoring. Hence, the focus is on software engineering support in terms of a sound programming API and runtime execution environment, which is maintained as the underlying mobile phone operating systems evolve. Moreover, the focus is on providing an extensible API and runtime environment, which allows for adding application-specific data sampling, wearable devices, data formatting, data management, and data uploading functionality.
The code listing below shows a simple Dart example of how to add sampling to a Flutter app. This example illustrates how sampling is configured, deployed, initialized, and used in three basic steps:
- a study protocol is defined;
- the runtime environment is created and initialized, and
- sensing is started and the stream of sampling events is consumed and used in the app.
import 'package:carp_core/carp_core.dart';
import 'package:carp_mobile_sensing/carp_mobile_sensing.dart';

/// This is an example of how to set up the most minimal study.
Future<void> example() async {
  // Create a study protocol.
  SmartphoneStudyProtocol protocol = SmartphoneStudyProtocol(
    ownerId: 'AB',
    name: 'Track patient movement',
  );

  // Define which devices are used for data collection.
  // In this case, it's only this smartphone.
  var phone = Smartphone();
  protocol.addMasterDevice(phone);

  // Automatically collect step count, ambient light, screen activity, and
  // battery level. Sampling is delayed by 10 seconds.
  protocol.addTriggeredTask(
      DelayedTrigger(delay: Duration(seconds: 10)),
      BackgroundTask(name: 'Sensor Task')
        ..addMeasures([
          Measure(type: SensorSamplingPackage.PEDOMETER),
          Measure(type: SensorSamplingPackage.LIGHT),
          Measure(type: DeviceSamplingPackage.SCREEN),
          Measure(type: DeviceSamplingPackage.BATTERY),
        ]),
      phone);

  // Create and configure a client manager for this phone, and
  // create a study based on the protocol.
  SmartPhoneClientManager client = SmartPhoneClientManager();
  await client.configure();
  Study study = await client.addStudyProtocol(protocol);

  // Get the study controller and try to deploy the study.
  SmartphoneDeploymentController? controller = client.getStudyRuntime(study);
  await controller?.tryDeployment();

  // Configure the controller.
  await controller?.configure();

  // Start the study.
  controller?.start();

  // Listen to and print all data events from the study.
  controller?.data.forEach(print);
}
In essence, the core logic of CAMS can be broken into three independent parts:
1. Create and deploy a study protocol
This entails:
- create (or load) a StudyProtocol
- deploy the protocol in a DeploymentService, like the CAMS-specific SmartphoneDeploymentService (see the sketch below).
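A minimal sketch of step 1 is shown below. It assumes the 0.40.x API, where SmartphoneDeploymentService is a singleton implementing the carp_core DeploymentService interface; verify the method names against the version you use.
import 'package:carp_core/carp_core.dart';
import 'package:carp_mobile_sensing/carp_mobile_sensing.dart';

/// Sketch of step 1: create a protocol and deploy it in a deployment service.
Future<String> createAndDeployProtocol() async {
  // Create (or load) a study protocol.
  SmartphoneStudyProtocol protocol = SmartphoneStudyProtocol(
    ownerId: 'AB',
    name: 'Track patient movement',
  );
  protocol.addMasterDevice(Smartphone());

  // Deploy the protocol in the CAMS-specific (local) deployment service.
  StudyDeploymentStatus status =
      await SmartphoneDeploymentService().createStudyDeployment(protocol);

  // The deployment id is used in step 2 to get the deployment on the phone.
  return status.studyDeploymentId;
}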
2. Load a study deployment for the smartphone and execute the sensing
This entails:
- create and configure a ClientManager, like the CAMS-specific SmartPhoneClientManager
- create a SmartphoneDeploymentController for executing the deployment
- if needed, load and register external SamplingPackages
- if needed, load and register external DataEndPoints and corresponding DataManagers
- configure and start the SmartphoneDeploymentController (see the sketch below).
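A sketch of step 2 could look as follows. It assumes the 0.40.x client API (configure(deploymentService: ...), addStudy(studyDeploymentId, deviceRoleName), and Smartphone.DEFAULT_ROLENAME); MySamplingPackage is a hypothetical external sampling package, only shown to indicate where such a registration would go.
import 'package:carp_core/carp_core.dart';
import 'package:carp_mobile_sensing/carp_mobile_sensing.dart';

/// Sketch of step 2: get the deployment created in step 1 on this phone
/// and start executing it.
Future<void> runDeployment(String studyDeploymentId) async {
  // If needed, register external sampling packages (and data managers)
  // before configuring the client. `MySamplingPackage` is a placeholder
  // for a real package, e.g. one adding support for a wearable device.
  // SamplingPackageRegistry().register(MySamplingPackage());

  // Create and configure the client manager, using the same deployment
  // service as in step 1.
  SmartPhoneClientManager client = SmartPhoneClientManager();
  await client.configure(deploymentService: SmartphoneDeploymentService());

  // Add the study deployment to the client and get its runtime controller.
  Study study =
      await client.addStudy(studyDeploymentId, Smartphone.DEFAULT_ROLENAME);
  SmartphoneDeploymentController? controller = client.getStudyRuntime(study);

  // Deploy, configure, and start the controller.
  await controller?.tryDeployment();
  await controller?.configure();
  controller?.start();
}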
3. Use the data in the app
If the generated data is to be used in the app (e.g., shown in the user interface), listen in on the data stream (see the sketch below).
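For example, the data stream can be hooked up to the app's user interface with a standard Flutter StreamBuilder. The sketch below simply renders each incoming data point as text, assuming controller is the SmartphoneDeploymentController from step 2.
import 'package:flutter/material.dart';
import 'package:carp_mobile_sensing/carp_mobile_sensing.dart';

/// Sketch of step 3: show the most recent data point in the app's UI.
class DataStreamView extends StatelessWidget {
  final SmartphoneDeploymentController controller;

  const DataStreamView(this.controller, {Key? key}) : super(key: key);

  @override
  Widget build(BuildContext context) => StreamBuilder(
        stream: controller.data,
        builder: (context, snapshot) => Text(
          snapshot.hasData ? '${snapshot.data}' : 'Waiting for data ...',
        ),
      );
}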
Section 3 on Using CARP Mobile Sensing contains more details on this flow. But before digging into this, read Section 2 on the Domain Model. Section 1 provides some technical details on the CAMS Architecture, but it is not necessary to read this before using CAMS. Section 5 provides details on how to extend CAMS.