Mobile and wearable computing have recently seen a rise in continuous sensing across many sensor types and in on-device processing. The combination of these sensor arrays and the corresponding inference algorithms can generate detailed, non-trivial, multidimensional information about both the environment and the individual carrying the device. Moreover, a growing number of third-party applications use this information for context-aware computing, personal well-being, and just-in-time health care.
In this paper, we propose SAINT, a scalable real-time sensing and inference toolkit for mobile devices. SAINT uses a client-server architecture. The server supplies clients with different sensing (e.g., accelerometer and location) and inference (e.g., human speech) streams on demand. If more than one client asks for the same sensing or inference data, the server performs the necessary computation (e.g., recognizing human speech) once and serves all the clients needing that data via inter-process communication. Furthermore, the SAINT server internally enables easy sharing of sensing and inference data among different recognition modules (e.g., sleep sensing can use movement and speech inference data) over a simple middleman $bus$-like structure. This bus architecture makes any sensing or inference stream available for consumption by any other inference module, making complex inference possible through the reuse of previously built components. We show that the bus and client-server architecture incur low overhead and avoid the growth in CPU time and power usage that redundant per-client computation would otherwise cause as the number of clients increases. In addition, developers found the SAINT framework easy to program against and deploy.
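To make the bus idea concrete, the following is a minimal sketch, not SAINT's actual implementation: all class and stream names (`Bus`, `SpeechInference`, `"audio"`, `"speech"`) are hypothetical. It illustrates how a middleman bus lets one computation serve many subscribers, and how a derived inference stream can itself be republished for other modules to consume.

```python
# Hypothetical sketch of a middleman bus: producers publish named streams,
# any number of consumers subscribe, so each computation runs only once.
from collections import defaultdict


class Bus:
    """Routes samples from stream producers to all registered subscribers."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, stream, callback):
        self._subscribers[stream].append(callback)

    def publish(self, stream, sample):
        for callback in self._subscribers[stream]:
            callback(stream, sample)


class SpeechInference:
    """Consumes the raw audio stream once and republishes a derived stream."""

    def __init__(self, bus):
        self.bus = bus
        self.runs = 0  # counts how often the (expensive) inference executes
        bus.subscribe("audio", self.on_audio)

    def on_audio(self, stream, sample):
        self.runs += 1  # the heavy computation would happen here, once per sample
        self.bus.publish("speech", {"speaking": sample["energy"] > 0.5})


bus = Bus()
speech = SpeechInference(bus)

# Two clients want speech inference; both subscribe to the derived stream.
received = []
bus.subscribe("speech", lambda s, x: received.append(("client1", x)))
bus.subscribe("speech", lambda s, x: received.append(("client2", x)))

bus.publish("audio", {"energy": 0.9})  # one raw audio sample arrives
# The inference module ran once, yet both clients were served.
```

In this sketch the derived `"speech"` stream is just another bus stream, so a further module (e.g., sleep sensing) could subscribe to it the same way the two clients do, which is the reuse property the bus architecture provides.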