This is a preview release. The API may change.
DataFire is an open source integration framework. It is built on open standards such as RSS and Open API, and can be run locally, on AWS Lambda, on the Serverless framework, or on DataFire.io.
DataFire natively supports over 250 public APIs including:
• Slack • GitHub • Twilio • Trello • Spotify • Instagram • Gmail • Google Analytics • YouTube •
as well as common databases, RSS feeds, and custom integrations.
Be sure to install DataFire both globally and as a project dependency:

```shell
npm install -g datafire@beta
npm install --save datafire@beta
```
- [Quickstart](examples/0. quickstart)
- News Headlines - Send yourself a daily e-mail with headlines from NPR, CNN, and NYTimes
- Listen to This - Create a Spotify playlist from tracks posted to Reddit's r/listentothis
- GitHub to Trello - Create Trello cards for every issue in your repo
- Heroku Crash Alerts - Get a Slack message when a Heroku process crashes
- [Authentication](examples/1. authentication)
- [Error Handling](examples/2. error_handling)
- [Pagination](examples/3. pagination)
- [Data Diffing](examples/4. data_diffing)
See Flows.md for the full documentation
Flows allow you to make a series of calls to different APIs and services. You can synchronize, transfer, and react to data, no matter where it's stored.
You can view this flow in the [examples directory](./examples/0. quickstart).
This quick tutorial will fetch stories from Hacker News, get the details for the top story, then store the results to a local file.
First, let's create a new folder and add the Hacker News integration:
```shell
mkdir hacker_news_flow
cd hacker_news_flow
datafire integrate hacker_news
```
Now we can create a Flow. Edit `./getTopStory.js`:
```js
const datafire = require('datafire');
const fs = require('fs');
const hackerNews = datafire.Integration.new('hacker_news');

const flow = module.exports =
      new datafire.Flow('Top HN Story', 'Copies the top HN story to a local file');

flow
  .step('stories', {
    // Fetch the list of top story IDs
    do: hackerNews.getStories(),
    params: {storyType: 'top'},
  })
  .step('story_details', {
    // Look up the first (top) story by its ID
    do: hackerNews.getItem(),
    params: data => {
      return {itemID: data.stories[0]};
    },
  })
  .step('write_file', {
    // Write the story details to a local file
    do: data => {
      fs.writeFileSync('./story.json', JSON.stringify(data.story_details, null, 2));
    },
  });
```
Now let's run it:
```shell
datafire run -f ./getTopStory.js
```
You should see `story.json` in your current directory.
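The step chaining used above follows a simple pattern: each step's result is stored under the step's name, and later steps receive the accumulated data object. Here is a plain-Node sketch of that pattern (illustrative only, with made-up data; this is not the real `datafire` module):

```javascript
// Minimal illustration of the step-chaining pattern: each step's
// result is keyed by its name, and later steps read earlier results
// from the accumulated data object. (Not the real datafire module.)
class MiniFlow {
  constructor() { this.steps = []; }
  step(name, fn) {
    this.steps.push({name, fn});
    return this;  // allow chaining, like flow.step(...).step(...)
  }
  run() {
    const data = {};
    for (const {name, fn} of this.steps) {
      data[name] = fn(data);  // later steps see all earlier results
    }
    return data;
  }
}

const flow = new MiniFlow()
  .step('stories', () => [101, 102, 103])                             // fake story IDs
  .step('story_details', data => ({id: data.stories[0], title: 'Top story'}));

const result = flow.run();
// result.story_details.id === 101
```

This is why the tutorial's `story_details` step can read `data.stories[0]`: the `stories` step ran first and left its result on the shared data object.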
Run `datafire --help` or `datafire <command> --help` for more info.
```shell
datafire list -a           # View all available integrations
datafire list              # View installed integrations
datafire integrate gmail   # Add integrations by name (or a substring)
datafire describe gmail    # Show info and operations
datafire describe gmail -o gmail.users.messages.list  # Show operation details
datafire describe gmail -o "GET /{userId}/messages"   # Alternative operation name
datafire authenticate gmail  # Store credentials for later use

# Make a test call to the API
datafire call github -o "GET /users"

# Use stored credentials with --as
datafire call github -o "GET /user" --as account_alias

# Pass parameters with --params.foo
datafire call github -o "GET /users/{username}" --params.username karpathy

# Run a flow
datafire run ./getMessages.js
```
See Integrations.md for the full documentation
You can add new integrations automatically from an OpenAPI specification or RSS feed. There is also experimental support for writing custom integrations.
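To see why an OpenAPI specification is enough to generate an integration, note that each path/method pair in the spec maps to an operation name like `GET /users/{username}` (the naming used by `datafire describe` and `datafire call` above). The sketch below walks a minimal, made-up OpenAPI 2.0 fragment and derives those names; it is an illustration, not DataFire's actual generator:

```javascript
// A minimal, illustrative OpenAPI 2.0 fragment — the kind of document
// an integration can be generated from. The API described is fictional.
const spec = {
  swagger: '2.0',
  info: {title: 'Example API', version: '1.0'},
  host: 'api.example.com',
  paths: {
    '/users/{username}': {
      get: {
        operationId: 'getUser',
        parameters: [{name: 'username', in: 'path', required: true, type: 'string'}],
      },
    },
  },
};

// Each path + HTTP method pair becomes one operation, named "METHOD /path".
const operations = [];
for (const [path, methods] of Object.entries(spec.paths)) {
  for (const method of Object.keys(methods)) {
    operations.push(`${method.toUpperCase()} ${path}`);
  }
}
// operations → ['GET /users/{username}']
```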
See Authentication.md for the full documentation
DataFire can store authentication details for each integration, and multiple accounts can be created for a given integration. Support for basic authentication (username/password), API keys, and OAuth 2.0 is built-in.
See RunningFlows.md for the full documentation
Once you've written a flow, you have a number of options for running it:
- Manually on the command line
- On a schedule with cron
- On AWS Lambda
- Inside a Serverless project
- On DataFire.io
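For the cron option, a crontab entry along these lines runs a flow on a schedule (the path and time shown are illustrative, using the quickstart flow from above):

```shell
# Add via `crontab -e` — run the flow every day at 08:00.
# The project path is illustrative; point it at your own flow directory.
0 8 * * * cd /path/to/hacker_news_flow && datafire run ./getTopStory.js
```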
Lambda, Serverless, and DataFire.io all offer ways to run your flow either on a schedule or in response to HTTP requests (webhooks).