
This is the wiki for the CACHET Research Platform (CARP) Mobile Sensing (CAMS) software. An overview of, and access to, the software libraries is available on the main GitHub site.

NOTE - This documentation is aligned with the CARP Mobile Sensing version 0.40.x libraries.

CARP Mobile Sensing Framework in Flutter

The CARP Mobile Sensing (CAMS) Flutter package carp_mobile_sensing is a programming framework for adding digital phenotyping capabilities to your mobile (health) app. CARP Mobile Sensing is designed for the collection of three main types of data:

  • passive sensing from onboard sensors on the phone (e.g., light, location, etc.)
  • collection of data from (wearable) devices and online services connected to the phone (e.g., heart rate, weather information, etc.)
  • active collection of data from the user (e.g., surveys, EMAs, questionnaires, etc.)

These wiki pages contain overall documentation of the main software architecture and domain model of the framework and describe how to use and extend it, plus a set of appendices providing details on measure types, data formats, and data backends.

Table of Contents

  1. Software Architecture – the overall picture.
  2. Domain Model – the detailed picture of the data model(s).
  3. Using CARP Mobile Sensing – how to configure passive sensing in your app.
  4. The AppTask Model – how to collect data from the user in your app.
  5. Extending CARP Mobile Sensing – how to extend the framework, with a focus on new sensing capabilities and external (wearable) devices.
  6. Best Practice – tips and tricks, especially related to differences between Android and iOS.

Appendices

Purpose and Goals

The overall goal of CAMS is to provide a programming framework that helps developers build custom mobile sensing applications.

The basic scenario we want to support is to allow programmers to design and implement a custom mHealth app, e.g., for cardiovascular diseases or diabetes. Such an app would have its main focus on providing application-specific functionality for patients with either heart rhythm problems or diabetes, which are two rather distinct application domains.

However, the goal of a mobile sensing programming framework is to enable programmers to add mobile sensing capabilities in a flexible and simple manner. This includes adding support for collecting data like ECG, location, activity, and step counts; formatting this data according to different health data formats (like the Open mHealth formats); using this data in the app (e.g., showing it to the user); and uploading it to a specific server, using a specific API (e.g., REST), in a specific format. The framework should be able to support different wearable devices for ECG or glucose monitoring. Hence, the focus is on software engineering support in terms of a sound programming API and runtime execution environment, which is maintained as the underlying mobile operating systems evolve. Moreover, the focus is on providing an extensible API and runtime environment, which allows for adding application-specific data sampling, wearable devices, data formatting, data management, and data uploading functionality.

Initial Example

The code listing below shows a simple Dart example of how to add sampling to a Flutter app. This basic example illustrates how sampling is configured, deployed, initialized, and used in three basic steps:

  1. a study protocol is defined;
  2. the runtime environment is created and initialized, and
  3. sensing is started and the stream of sampling events is consumed and used in the app.
import 'package:carp_core/carp_core.dart';
import 'package:carp_mobile_sensing/carp_mobile_sensing.dart';

/// This is an example of how to set up the most minimal study.
Future<void> example() async {
  // Create a study protocol
  SmartphoneStudyProtocol protocol = SmartphoneStudyProtocol(
    ownerId: 'AB',
    name: 'Track patient movement',
  );

  // Define which devices are used for data collection.
  // In this case, it's only this smartphone.
  var phone = Smartphone();
  protocol.addMasterDevice(phone);

  // Automatically collect step count, ambient light, screen activity, and
  // battery level. Sampling is delayed by 10 seconds.
  protocol.addTriggeredTask(
      DelayedTrigger(delay: Duration(seconds: 10)),
      BackgroundTask(name: 'Sensor Task')
        ..addMeasures([
          Measure(type: SensorSamplingPackage.PEDOMETER),
          Measure(type: SensorSamplingPackage.LIGHT),
          Measure(type: DeviceSamplingPackage.SCREEN),
          Measure(type: DeviceSamplingPackage.BATTERY),
        ]),
      phone);

  // Create and configure a client manager for this phone, and
  // create a study based on the protocol.
  SmartPhoneClientManager client = SmartPhoneClientManager();
  await client.configure();
  Study study = await client.addStudyProtocol(protocol);

  // Get the study controller and try to deploy the study.
  SmartphoneDeploymentController? controller = client.getStudyRuntime(study);
  await controller?.tryDeployment();

  // Configure the controller.
  await controller?.configure();

  // Start the study
  controller?.start();

  // Listen to and print all data events from the study.
  controller?.data.forEach(print);
}

In essence, the core logic of CAMS can be broken into three independent parts:

1. Create and deploy a study protocol

This entails:

  1. create (or load) a StudyProtocol
  2. deploy the protocol in a DeploymentService like the CAMS-specific SmartphoneDeploymentService.
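A minimal sketch of this part is shown below. It assumes the SmartphoneDeploymentService singleton from carp_mobile_sensing, which implements the carp_core DeploymentService interface; a remote deployment service could be used instead.

import 'package:carp_core/carp_core.dart';
import 'package:carp_mobile_sensing/carp_mobile_sensing.dart';

/// Sketch of part 1 – create a protocol and deploy it in a deployment service.
Future<StudyDeploymentStatus> createAndDeployProtocol() async {
  // Create (or load) a study protocol with this phone as master device.
  SmartphoneStudyProtocol protocol = SmartphoneStudyProtocol(
    ownerId: 'AB',
    name: 'Track patient movement',
  )..addMasterDevice(Smartphone());

  // Deploy the protocol in the (local) smartphone deployment service.
  StudyDeploymentStatus status =
      await SmartphoneDeploymentService().createStudyDeployment(protocol);
  return status;
}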

2. Load a study deployment for the smartphone and execute the sensing

This entails:

  1. create and configure a ClientManager like the CAMS-specific SmartPhoneClientManager
  2. create a SmartphoneDeploymentController for executing the deployment
  3. if needed, load and register external SamplingPackages
  4. if needed, load and register external DataEndPoints and corresponding DataManagers
  5. configure and start the SmartphoneDeploymentController
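
A sketch of this part is shown below, continuing the protocol from part 1. The registration calls for external sampling packages and data managers are commented out and purely illustrative – the concrete classes (e.g., ContextSamplingPackage, ConsoleDataManager) depend on which extension packages the app uses.

/// Sketch of part 2 – set up the client and execute the deployment.
Future<void> runDeployment(SmartphoneStudyProtocol protocol) async {
  // Create and configure a client manager for this phone.
  SmartPhoneClientManager client = SmartPhoneClientManager();
  await client.configure();

  // If needed, register external sampling packages before adding the study.
  // Illustrative – requires the corresponding extension package:
  //   SamplingPackageRegistry().register(ContextSamplingPackage());

  // If needed, register external data endpoints and data managers.
  // Illustrative:
  //   DataManagerRegistry().register(ConsoleDataManager());

  // Add the study, get its runtime controller, and try the deployment.
  Study study = await client.addStudyProtocol(protocol);
  SmartphoneDeploymentController? controller = client.getStudyRuntime(study);
  await controller?.tryDeployment();

  // Configure the controller and start sensing.
  await controller?.configure();
  controller?.start();
}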

3. Use the data in the app

If the generated data is to be used in the app (e.g., shown in the user interface), listen in on the data stream from the controller.
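
For example, the sketch below filters the data stream for pedometer events only. It assumes that each data point carries a carpHeader.dataFormat describing its measure type (as in the carp_core data model) and that its string form matches the measure type constant.

// Listen on the data stream and react to step count events only
// (assumes dataFormat.toString() yields the 'namespace.name' type string).
controller?.data
    .where((dataPoint) =>
        dataPoint.carpHeader.dataFormat.toString() ==
        SensorSamplingPackage.PEDOMETER)
    .listen((dataPoint) => print('step event: $dataPoint'));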


Section 3 on Using CARP Mobile Sensing contains more details on this flow. But before digging into this, read Section 2 on the Domain Model. Section 1 provides some technical details on the CAMS Architecture, but reading it should not be necessary before using CAMS. Section 5 provides details on how to extend CAMS.
