About JDPF


Recent Talks

» July 8, 2007
IDAMAP 2007, Amsterdam, Netherlands
An Extensible Software Framework for Temporal Data Processing.

» November 12, 2006
AMIA 2006 Annual Symposium, Washington D.C., USA
A Framework for Temporal Data Processing and Abstractions.
Ciccarese Paolo.

What is JDPF?

The Java Data Processing Framework (JDPF) helps you define, generate, and execute standard and custom data processing procedures. JDPF is developed and maintained by a small group of volunteers who aim to provide a highly customizable data processing infrastructure and a collection of reusable calculation algorithms. As we hope to grow into a community, we strongly encourage interested people to contribute and join the team.

What is it for?

JDPF has been designed to provide a modular and extensible infrastructure for processing data. Thanks to these characteristics, you can use JDPF to define your own data analysis model, reusing the modules already provided or developing your own solutions.

Our Mission

Our goal is to provide tools for the easy definition, generation, and execution of data processing procedures, assembled from blocks selected from the provided library or custom-built through our framework.
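To make the block-assembly idea concrete, here is a minimal sketch of a pipeline of processing blocks. All names here (Block, run, the smoothing and threshold blocks) are hypothetical illustrations, not the actual JDPF API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Hypothetical sketch of block-based processing in the spirit of JDPF;
// the real JDPF API differs, and these names are illustrative only.
public class PipelineSketch {
    // A processing block transforms a series of samples into a new series.
    interface Block extends Function<List<Double>, List<Double>> {}

    // A pipeline applies its blocks in order, feeding each output forward.
    static List<Double> run(List<Block> blocks, List<Double> input) {
        List<Double> data = input;
        for (Block b : blocks) {
            data = b.apply(data);
        }
        return data;
    }

    public static void main(String[] args) {
        // Block 1: a simple two-sample moving-average filter.
        Block smooth = in -> {
            List<Double> out = new ArrayList<>();
            for (int i = 0; i < in.size(); i++) {
                double prev = (i == 0) ? in.get(i) : in.get(i - 1);
                out.add((prev + in.get(i)) / 2.0);
            }
            return out;
        };
        // Block 2: a threshold abstraction mapping values to 0/1 states.
        Block threshold = in -> {
            List<Double> out = new ArrayList<>();
            for (double v : in) out.add(v > 1.0 ? 1.0 : 0.0);
            return out;
        };
        List<Double> result =
            run(List.of(smooth, threshold), List.of(0.5, 1.5, 2.5, 0.5));
        System.out.println(result); // prints [0.0, 0.0, 1.0, 1.0]
    }
}
```

Swapping, adding, or removing blocks changes the procedure without touching the pipeline driver, which is the reuse property the framework is built around.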


Previous work

JDPF is the natural evolution of the Tempo framework. Tempo was created mainly for temporal data analysis in medical applications such as long-term clinical monitoring of patients and on-line biomedical signal analysis systems. In such contexts, large amounts of raw temporal data are frequently available, and appropriate procedures are needed to reduce the complexity of the analysis. In particular, Temporal Abstraction techniques, coupled when necessary with filtering techniques, have proven very useful. Tempo was our first attempt to define a highly modular, configurable, and extensible architecture fostering fast deployment and software reuse of pipeline-based data processing procedures.