Social Feed Manager (SFM)¶
Social Feed Manager is open source software for libraries, archives, cultural heritage institutions and research organizations. It empowers those communities’ researchers, faculty, students, and archivists to define and create collections of data from social media platforms. Social Feed Manager will harvest from Twitter, Tumblr, Flickr, and Sina Weibo and is extensible for other platforms.
This site provides documentation for installation and usage of SFM. See the Social Feed Manager project site for full information about the project’s objectives, roadmap, and updates.
User Guide¶
Welcome to Social Feed Manager!
Social Feed Manager (SFM) is an open-source tool designed for researchers, archivists, and curious individuals to collect social media data from Twitter, Tumblr, Flickr, or Sina Weibo. See the SFM Overview for a quick look at SFM.
If you want to learn more about what SFM can do, read What is SFM used for? This guide is for users who have access to SFM and want to learn how to collect. If you’re an administrator setting up SFM for your institution, see the administration documentation.
You can always come back to this user guide for help by clicking Documentation at the bottom of any SFM page and selecting User Guide.
What is SFM used for?¶
Social Feed Manager (SFM) collects individual posts–tweets, photos, blogs–from social media sites. These posts are collected in their native, raw data format called JSON and can be exported in many formats, including spreadsheets. Users can then use this collected data for research, analysis or archiving.
Note that SFM currently collects social media data from Twitter, Tumblr, Flickr, and Sina Weibo.
Here’s a sample of what a collection set looks like:
Types of Collections¶
How to use the data¶
Privacy and platform policy considerations¶
Collecting and using data from social media platforms is subject to those platforms’ terms (Twitter, Flickr, Sina Weibo, Tumblr), as you agreed to them when you created your social media account. Social Feed Manager respects those platforms’ terms as an application (Twitter, Flickr, Sina Weibo, Tumblr).
Social Feed Manager provides data to you for your research and academic use. Social media platforms’ terms of service generally do not allow republishing of full datasets, and you should refer to their terms to understand what you may share. Authors typically retain rights and ownership to their content.
Ethical considerations¶
In addition to respecting the platforms’ terms, as a user of Social Feed Manager and data collected within it, it is your responsibility to consider the ethical aspects of collecting and using social media data. Your discipline or professional organization may offer guidance. In addition, take a look at these social media research ethical and privacy guidelines.
Setting up Credentials¶
Before you can start collecting, you need credentials for the social media platform that you want to use. Credentials are keys used by each platform to control the data they release to you.
You are responsible for creating your own credentials so that you can control your own collection rate and make sure that you are following the policies of each platform.
For more information about platform-specific policies, consult the documentation for each social media platform’s API.
Creating Collections¶
Collections are the basic SFM containers for social media data. Each collection either gathers posts from individual accounts or gathers posts based on search criteria.
Collections are contained in collection sets. While collection sets sometimes only include one collection, sets can be used to organize all of the data from a single project or archive–for example, a collection set about a band might include a collection of the Twitter user timelines of each band member, a collection of the band’s Flickr, and a Twitter Filter collection of tweets that use the band’s hashtag.
Before you begin collecting, you may want to consider these collection development guidelines.
Setting up Collections and Collection Sets¶
Because collections are housed in collection sets, you must make a collection set first.
Navigate to the Collection Sets page from the top menu, then click the Add Collection Set button.
Give the collection set a unique name and description. A collection set is like a folder for all collections in a project.
If you are part of a group project, you can contact your SFM administrator to set up a new group with which you can share each collection set. (This can be changed or added later on.)
Once you are in a collection set, click the “Add Collection” dropdown menu and select the collection type you want to add.
Enter a unique collection name and a short description. The description is a great location to describe how you chose what to put in your collection.
Select which credential you want to use. If you need to set up new credentials, see Setting up Credentials.
Adding Seeds¶
Seeds are the criteria used by SFM to collect social media posts. Seeds may be individual social media accounts or search terms used to filter posts.
The basic process for adding seeds is the same for every collection type, except for Twitter Sample and Sina Weibo:
For details on each collection type, see:
Exporting your Data¶
In order to access the data in a collection, you will need to export it. You are able to download your data in several formats, including Excel (.xlsx) and Comma Separated Values (.csv), which can be loaded into a spreadsheet or data analytic software.
API Credentials¶
Accessing the APIs of social media platforms requires credentials for authentication (also known as API keys). Social Feed Manager supports managing those credentials.
Credentials/authentication allow a user to collect data through a platform’s API. For some social media platforms (e.g., Twitter and Tumblr), limits are placed on methods and rates of collection on a per-credential basis.
SFM users are responsible for creating their own new credentials so that they can control their own collection rates and can ensure that they are following each platform’s API policies.
Most API credentials have two parts: an application credential and a user credential. (Flickr is the exception – only an application credential is necessary.)
For more information about platform-specific policies, consult the documentation for each social media platform’s API.
Managing credentials¶
SFM supports two approaches to managing credentials: adding credentials and connecting credentials. Both of these options are available from the Credentials page.
Adding credentials¶
For this approach, a user gets the application and/or user credential from the social media platform and provides them to SFM by completing a form. More information on getting credentials is below.
Connecting credentials¶
This is the easiest approach for users.
For this approach, SFM is configured with the application credentials for the social media platform by the systems administrator. The user credentials are obtained by the user being redirected to the social media website to give permission to SFM to access her account.
SFM is configured with the application credentials in the .env file. If additional management is necessary, it can be performed using the Social Accounts section of the Admin interface.
Platform specifics¶
Twitter credentials can be obtained from the Twitter API.
For detailed instructions, see Adding Twitter Credentials.
You must provide a callback URL which is http://<SFM hostname>/accounts/twitter/login/callback/. Note that this should be http not https even if you are using https.
Also, turn on Enable Callback Locking and Allow this application to be used to Sign in with Twitter.
It is recommended to change the application permissions to read-only.
Flickr credentials can be obtained from the Flickr API.
For detailed instructions, see Adding Flickr Credentials.
Tumblr credentials can be obtained from the Tumblr API.
For detailed instructions, see Adding Tumblr Credentials.
Weibo¶
For instructions on obtaining Weibo credentials, see this guide.
To use the connecting credentials approach for Weibo, the redirect URL must match the application’s actual URL and use port 80.
Adding Twitter Credentials¶
The easiest way to set up Twitter credentials is to connect them to your personal Twitter account (or another Twitter account you control). If you want more fine-tuned control, you can manually set up application-level credentials (see below).
To connect Twitter credentials, first sign in to Twitter with the account you want to use. Then, on the Credentials page, click Connect to Twitter. A window will pop up from Twitter, asking you for authorization. Click authorize, and your credentials will automatically connect.
Once credentials are connected, you can start Creating Collections.
Manually adding Twitter Credentials, rather than connecting them automatically using your Twitter account (see above), gives you greater control over your credentials and allows you to use multiple credentials.
Navigate to https://apps.twitter.com/.
Sign in to Twitter and select “Create New App.”
Enter a name for the app like Social Feed Manager or the name of a new Collection Set.
Enter a description. You may copy and paste: This is a social media research and archival tool, which collects data for academic researchers through an accessible user interface.
Enter a Website such as the SFM url. Any website will work.
Review and agree to the Twitter Developer Agreement and click Create your Twitter Application.
Go to the Credentials page of SFM, and click Add Twitter Credential.
Copy the values from your Twitter app into the corresponding fields in SFM: Access Token and Access Token Secret.
Click Save.
Adding Flickr Credentials¶
Adding Tumblr Credentials¶
Adding Weibo Credentials¶
For instructions on obtaining Weibo credentials, see this guide.
To use the connecting credentials approach for Weibo, the redirect URL must match the application’s actual URL and use port 80.
Collection types¶
Each collection type connects to one of a social media platform’s APIs, or methods for retrieving data. Understanding what each collection type provides is important to ensure you collect what you need and are aware of any limitations. Reading the social media platform’s documentation provides further important details.
Twitter user timeline¶
Twitter user timeline collections collect the 3,200 most recent tweets from each of a list of Twitter accounts using Twitter’s user_timeline API.
Seeds for Twitter user timelines are individual Twitter accounts.
To identify a user timeline, you can provide a screen name (the string after @, like NASA for @NASA) or Twitter user ID (a numeric string which never changes, like 11348282 for @NASA). If you provide one identifier, the other will be looked up and displayed in SFM the first time the harvester runs. The user may change the screen name over time, and the seed will be updated accordingly.
The harvest schedule should depend on how prolific the Twitter users are. In general, the more frequent the tweeter, the more frequent you’ll want to schedule harvests.
SFM will notify you when incorrect or private user timeline seeds are requested; all other valid seeds will be collected.
See Incremental collecting to decide whether or not to collect incrementally.
Twitter search¶
Twitter searches collect tweets from the last 7-9 days that match search queries, similar to a regular search done on Twitter, using the Twitter Search API. This is not a complete search of all tweets; results are limited both by time and arbitrary relevance (determined by Twitter).
Search queries must follow standard search term formulation; permitted queries are listed in the documentation for the Twitter Search API, or you can construct a query using the Twitter Advanced Search query builder.
Broad Twitter searches may take longer to complete – possibly days – due to Twitter’s rate limits and the amount of data available from the Search API. In choosing a schedule, make sure that there is enough time between searches. (If there is not enough time between searches, later harvests will be skipped until earlier harvests complete.) In some cases, you may only want to run the search once and then turn off the collection.
See Incremental collecting to decide whether or not to collect incrementally.
Twitter sample¶
Twitter samples are a random collection of approximately 0.5–1% of public tweets, collected using the Twitter sample stream. This is useful for capturing a sample of what people are talking about on Twitter, and amounts to roughly 3 GB of data per day (compressed).
Unlike other Twitter collections, there are no seeds for a Twitter sample.
When on, the sample returns data every 30 minutes.
Only one sample or Twitter filter can be run at a time per credential.
Twitter filter¶
Twitter Filter collections harvest a live selection of public tweets from criteria matching keywords, locations, or users, based on the Twitter filter streaming API. Because tweets are collected live, tweets from the past are not included. (Use a Twitter search collection to find tweets from the recent past.)
There are three different filter queries supported by SFM: track, follow, and location.
Track collects tweets based on a keyword search. A space between words is treated as ‘AND’ and a comma is treated as ‘OR’. Note that exact phrase matching is not supported. See the track parameter documentation for more information.
Note that terms must be separated with the Roman ‘,’ character. When typing in certain languages that use a non-Roman alphabet, a different character is generated for commas. For example, when typing in languages such as Arabic, Farsi, or Urdu, typing a comma generates the ‘،’ character. To avoid errors, the Track parameter should use the Roman ‘,’ character; for example: سواقة المرأه , قرار قيادة سيارة
Follow collects tweets that are posted by or about a user (not including mentions) from a comma-separated list of user IDs (the numeric identifier for a user account). Tweets collected will include those made by the user, retweeting the user, or replying to the user. See the follow parameter documentation for more information.
Location collects tweets that were geolocated within specific parameters, based on a bounding box made using the southwest and northeast corner coordinates. See the location parameter documentation for more information.
Twitter will return a limited number of tweets, so filters that return many results will not return all available tweets. Therefore, more narrow filters will usually return more complete results.
Only one filter or Twitter sample can be run at a time per credential.
SFM captures the filter stream in 30 minute chunks and then momentarily stops. Between rate limiting and these momentary stops, you should never assume that you are getting every tweet.
There is only one seed in a filter collection. Twitter filter collections are either turned on or off (there is no schedule).
Flickr user¶
Flickr User Timeline collections gather metadata about public photos by a specific Flickr user, and, optionally, copies of the photos at specified sizes.
Each Flickr user collection can have multiple seeds, where each seed is a Flickr user. To identify a user, you can provide either a username or an NSID. If you provide one, the other will be looked up and displayed in the SFM UI during the first harvest. The NSID is a unique identifier and does not change; usernames may be changed but are unique.
Usernames can be difficult to find, so to ensure that you have the correct account, use this tool to find the NSID from the account URL (i.e., the URL when viewing the account on the Flickr website).
Depending on the image sizes you select, the actual photo files will be collected as well. Be very careful in selecting the original file size, as this may require a significant amount of storage. Also note that some Flickr users may have a large number of public photos, which may require a significant amount of storage. It is advisable to check the Flickr website to determine the number of photos in each Flickr user’s public photo stream before harvesting.
For each user, the user’s information will be collected using Flickr’s people.getInfo API and the list of her public photos will be retrieved from people.getPublicPhotos. Information on each photo will be collected with photos.getInfo.
See Incremental collecting to decide whether or not to collect incrementally.
Tumblr blog posts¶
Tumblr Blog Post collections harvest posts by specified Tumblr blogs using the Tumblr Posts API.
Seeds are individual blogs for these collections. Blogs can be specified with or without the .tumblr.com extension.
See Incremental collecting to decide whether or not to collect incrementally.
Weibo timeline¶
Weibo Timeline collections harvest weibos (microblogs) by the user and friends of the user whose credentials are provided using the Weibo friends_timeline API.
Note that because collection is determined by the user whose credentials are provided, there are no seeds for a Weibo timeline collection. To change what is being collected, change the user’s friends from the Weibo website or app.
Weibo search¶
Collects recent weibos that match a search query using the Weibo search_topics API. The Weibo API does not return a complete search of all Weibo posts. It only returns the most recent 200 posts matching a single keyword when found between pairs of ‘#’ in Weibo posts (for example: #keyword# or #你好#)
The incremental option will attempt to only count weibo posts that haven’t been harvested before, maintaining a count of non-duplicate weibo posts. Because the Weibo search API does not accept since_id or max_id parameters, filtering out already-harvested weibos from the search count is accomplished within SFM.
When the incremental option is not selected, the search will be performed again, and there will most likely be duplicates in the count.
Incremental collecting¶
The incremental option is the default and will collect tweets or posts that have been published since the last harvest. When the incremental option is not selected, the maximum number of tweets or posts will be harvested each time the harvest runs. If a non-incremental harvest is performed multiple times, there will most likely be duplicates. However, with these duplicates, you may be able to track changes across time in a user’s timeline, such as changes in retweet and like counts, deletion of tweets, and follower counts.
Data Dictionaries for CSV/Excel Exports¶
Social Feed Manager captures a variety of data from each platform. These data dictionaries give explanations for each selected and processed field in exports.
Note that these are subsets of the data that are collected for each post. The full data is available for export by selecting “Full JSON” as the export format or by exporting from the commandline. See Command-line exporting/processing.
Twitter Dictionary¶
For more info about source tweet data, see the Twitter API documentation, including Tweet data dictionaries.
Documentation about older archived tweets is archived by the Wayback Machine for the Twitter API, Tweets, and Entities.
Tumblr Dictionary¶
For more info about source post data, see the Tumblr API documentation, particularly Posts.
Documentation about older archived posts is archived by the Wayback Machine for the original Tumblr API and the newer Tumblr API.
Flickr Dictionary¶
For more info about source photo data, see the Flickr API documentation, particularly People and Photos.
Documentation about older archived posts is archived by the Wayback Machine here.
Licensing allowed for media, given as a numeral according to the following key:
Appropriateness of post, given as a numeral according to the following key:
Weibo Dictionary¶
For more info about source weibo data, see the Sina Weibo API friends_timeline documentation.
Documentation about older archived weibos is archived by the Wayback Machine here.
Note that for privacy purposes, Weibo dictionary examples are not consistent.
Command-line exporting/processing¶
While social media data can be exported from the SFM UI, in some cases you may want to export from the commandline. These cases include:
To support export and processing from the commandline, SFM provides a processing container. A processing container is a Linux shell environment with access to the SFM’s data and preloaded with a set of useful tools.
Using a processing container requires familiarity with the Linux shell and shell access to the SFM server. If you are interested in using a processing container, please contact your SFM administrator for help.
When exporting/processing data, remember that harvested social media content is stored in /sfm-data. /sfm-processing is provided to store your exports, processed data, or scripts. Depending on how it is configured, you may have access to /sfm-processing from your local filesystem. See Storage.
Processing container¶
To bootstrap export/processing, a processing image is provided. A container instantiated from this image is Ubuntu 14.04 and pre-installed with the warc iterator tools, find_warcs.py, and some other useful tools. (Warc iterators and find_warcs.py are described below.) It will also have read-only access to the data from /sfm-data and read/write access to /sfm-processing.
The other tools available in a processing container are:
To instantiate a processing container, run the command shown below from the directory that contains your docker-compose.yml file. You will then be provided with a bash shell inside the container from which you can execute commands:
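A minimal sketch of this, assuming the processing service is defined as processing in your docker-compose.yml:

    docker-compose run --rm processing /bin/bash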
Note that once you exit the processing container, the container will be automatically removed. However, if you have saved all of your scripts and output files to /sfm-processing, they will be available when you create a new processing container.
SFM commandline tools¶
Warc iterators¶
SFM stores harvested social media data in WARC files. A warc iterator tool provides an iterator to the social media data contained in WARC files. When used from the commandline, it writes out the social media items one at a time to standard out. (Think of this as cat-ing a line-oriented JSON file. It is also equivalent to the output of Twarc.)
Each social media type has a separate warc iterator tool. For example, twitter_rest_warc_iter.py extracts tweets recorded from the Twitter REST API. An illustrative invocation is shown after the list below.
Here is a list of the warc iterators:
twitter_rest_warc_iter.py: Tweets recorded from the Twitter REST API.
twitter_stream_warc_iter.py: Tweets recorded from the Twitter Streaming API.
flickr_photo_warc_iter.py: Flickr photos
weibo_warc_iter.py: Weibos
tumblr_warc_iter.py: Tumblr posts
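As an illustration, the following sketch (the WARC path is a placeholder for one of your own files) prints the first tweet from a WARC:

    twitter_rest_warc_iter.py /sfm-data/collection_set/<path to WARC file>.warc.gz | head -n 1 | jq .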
Find Warcs¶
find_warcs.py helps put together a list of WARC files to be processed by other tools, e.g., warc iterator tools. (It gets the list of WARC files by querying the SFM API.)
Here are the arguments it accepts:
For example, to get a list of the WARC files in a particular collection, provide some part of the collection id:
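For instance, a sketch using a collection id fragment (an example value; yours will differ):

    find_warcs.py 7c37157

The full paths of the matching WARC files are printed to standard out.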
(If the collection has only one WARC file, a single path is printed; if there is more than one, the paths are space separated. Use --newline to separate with a newline instead.)
The collection id can be found from the SFM UI.
Note that if you are running find_warcs.py from outside a Docker environment, you will need to supply --api-base-url.
Sync scripts¶
Sync scripts will extract Twitter data from WARC files for a collection and write tweets to line-oriented JSON files and tweet ids to text files. It is called a “sync script” because it will skip WARCs that have already been processed.
Sync scripts are parallelized, allowing for faster processing.
There are sync scripts for Twitter REST collections (twitter_rest_sync.sh) and Twitter stream collections (twitter_stream_sync.sh). Usage is ./<script> <collection id> <destination directory> <# of threads>. For example:
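For example, a sketch of an invocation following that usage (collection id, destination, and thread count are placeholders):

    ./twitter_rest_sync.sh 7c37157 /sfm-processing/export 2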
READMEs¶
The exportreadme management command will output a README file that can be used as part of the documentation for a dataset. The README contains information on the collection, including the complete change log. Here is an example of creating a README:
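A sketch of what this might look like, run from the host (the container name and manage.py path are assumptions; adjust for your deployment):

    docker exec sfm_ui_1 /opt/sfm-ui/sfm/manage.py exportreadme 7c37157 > README.txt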
For examples, see the README files in this open dataset.
Note that this is a management command; thus, it is executed differently than the commandline tools described above.
Recipes¶
Extracting URLs¶
The “Extracting URLs from #PulseNightclub for seeding web archiving” blog post provides some useful guidance on extracting URLs from tweets, including unshortening and sorting/counting.
Exporting to line-oriented JSON files¶
This recipe is for exporting social media data from WARC files to line-oriented JSON files. There will be one JSON file for each WARC. This may be useful for some processing or for loading into some analytic tools.
This recipe uses parallel for parallelizing the export.
Create a list of WARC files:
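One possible command (a sketch; find_warcs.py is available inside the processing container):

    find_warcs.py 7c37157 | tr ' ' '\n' > source.lst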
Replace 7c37157 with the first few characters of the collection id that you want to export. The collection id is available on the collection detail page in SFM UI.
Create a list of JSON destination files:
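A sketch of one way to do this:

    xargs -n 1 basename < source.lst | sed 's/\.warc\.gz$/.json/' > dest.lst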
This command puts all of the JSON files in the same directory, using the filename of the WARC file with a .json file extension.
If you want to maintain the directory structure, but use a different root directory:
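For example (a sketch assuming the WARCs live under /sfm-data/collection_set; you may need to create the destination directories with mkdir -p first):

    sed 's|/sfm-data/collection_set|sfm-processing/export|; s/\.warc\.gz$/.json/' source.lst > dest.lst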
Replace sfm-processing/export with the root directory that you want to use.
Perform the export:
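A sketch using GNU parallel to pair the source and destination lists:

    parallel -a source.lst -a dest.lst --xapply "twitter_stream_warc_iter.py {1} > {2}"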
Replace twitter_stream_warc_iter.py with the name of the warc iterator for the type of social media data that you are exporting.
You can also perform a filter on export using jq. For example, this only exports tweets in Spanish:
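A sketch of such a filtered export (the lang field is part of the native tweet JSON):

    parallel -a source.lst -a dest.lst --xapply "twitter_stream_warc_iter.py {1} | jq -c 'select(.lang == \"es\")' > {2}"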
And to save space, the JSON files can be gzip compressed:
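For example (a sketch):

    parallel -a source.lst -a dest.lst --xapply "twitter_stream_warc_iter.py {1} | gzip > {2}"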
You might also want to change the file extension of the destination files to “.json.gz” by adjusting the command used to create the list of JSON destination files. To access the tweets in a gzipped JSON file, use:
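For example (a sketch; substitute one of your exported files):

    gzip -dc sfm-processing/export/example.json.gz | jq .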
Counting posts¶
wc -l can be used to count posts. To count the number of tweets in a collection:
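One possible approach combines find_warcs.py with the appropriate warc iterator (the collection id fragment is a placeholder):

    twitter_rest_warc_iter.py $(find_warcs.py 7c37157) | wc -l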
To count the posts from line-oriented JSON files created as described above:
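For example:

    wc -l sfm-processing/export/*.json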
wc -l gotcha: When doing a lot of counting, wc -l will output a partial total and then reset the count. The partial totals must be added together to get the grand total. For example:
Using jq to process JSON¶
For tips on using jq with JSON from Twitter and other sources, see:
Releasing public datasets¶
Many social media platforms place limitations on sharing of data collected from their APIs. One common approach for sharing data, in particular for Twitter, is to only share the identifiers of the social media items. Someone can then recreate the dataset by retrieving the items from the API based on the identifiers. For Twitter, the process of extracting tweet ids is often called “dehydrating” and retrieving the full tweet is called “hydrating.”
Note that retrieving the entire original dataset may not be possible, as the social media platform may opt to not provide social media items that have been deleted or are no longer public.
This example shows the steps for releasing the Women’s March dataset to Dataverse. The Women’s March dataset was created by GWU and published on the Harvard Dataverse. These instructions can be adapted for publishing your own collections to the dataset repository of your choice.
Note that the Women’s March dataset is a single (SFM) collection. For an example of publishing multiple collections to a single dataset, see the 2016 United States Presidential Election dataset.
Exporting collection data¶
Access the server where your target collection is located and instantiate a processing container. (See Command-line exporting/processing):
Replace sfmserver.org with the address of the SFM server that you want to export data from.
Find the WARC files where the data of your target collection are stored, and create a list of WARC files (source.lst) and a list of destination text files (dest.lst):
Replace 0110497 with the first few characters of the collection id that you want to export. The collection id is available on the collection detail page in SFM UI. (See the picture below.)
Write the tweet ids to the destination text files:
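A sketch of such a command, matching the description in the next two paragraphs:

    parallel -j 3 -a source.lst -a dest.lst --xapply "twitter_stream_warc_iter.py {1} | jq -r '.id_str' > {2}"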
This command executes a Twitter Stream WARC iterator to extract the tweets from the WARC files and jq to extract the tweet ids. This shows using twitter_stream_warc_iter.py for a Twitter stream collection. For a Twitter REST collection, use twitter_rest_warc_iter.py.
Parallel is used to perform this process in parallel (using multiple processors), using WARC files from source.lst and text files from dest.lst. -j 3 limits parallel to 3 processors. Make sure to select an appropriate number for your server.
An alternative to steps 1 and 2 is to use a sync script to write tweet id text files and tweet JSON files in one step. (See Command-line exporting/processing)
Combine multiple files into large files:
The previous command creates a single text file containing tweet ids for each WARC file. To combine the tweets into a single file:
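For example (the output file name is a placeholder; it is written one directory up so it is not swept into the glob):

    cat *.txt > ../womensmarch-tweet-ids.txt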
Create a README file that contains information on each collection (management command for sfm ui):
Exit from the processing container, and connect to the UI container and execute the exportreadme management command to create a README file for the dataset:
Copy the files from the server to your local hard drive:
Exit from the SFM server with the exit command, move to a location on your local hard drive where you want to store the data, and run the command below:
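For example (a sketch; username, hostname, and path are placeholders):

    scp username@sfmserver.org:/sfm-processing/womensmarch-tweet-ids.txt .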
Replace username and sfmserver.org with your user ID and the address of the SFM server, respectively.
Publishing collection data on Dataverse¶
For this example, we will be adding the collection to the GW Libraries Dataverse on the Harvard Dataverse instance.
Go to the GW Libraries Dataverse and log in.
Open the New Dataset page:
Click ‘Add Data > New Dataset’.
Fill in the metadata with the appropriate information (title, author, contact, description, subject, keyword):
Make sure you input the right number of tweets collected and appropriate dates in the description.
Upload the files (both data and README files) and save the dataset:
Publish the dataset:
Go to the page of the draft that was just saved, and click the ‘Publish’ button.
Adding link to Dataverse dataset¶
Once you have published your collection data on Dataverse, you can add a link to it from SFM. This will allow other SFM users to find the public version of your collection.
Citing SFM and datasets¶
Citing SFM¶
The recommended citation for Social Feed Manager (i.e., the software) is:
For more guidance on citing SFM, see SFM in Zenodo.
Citing a public dataset¶
Some SFM collections have been released as public datasets, usually by depositing them in a data repository. (See Releasing public datasets).
Usually the public version will provide guidance on citing. For example, the 2016 United States Presidential Election collection is deposited in Harvard’s Dataverse, which offers the following assistance on citing:
Within SFM, a link may be provided to the public version of a dataset.
Citing your own dataset¶
To make your dataset citable and reusable by others, you are encouraged to release it as public dataset. (See Releasing public datasets). You are also encouraged to cite SFM within your dataset release and your publication.
Installation and configuration¶
Overview¶
The supported approach for deploying SFM is Docker containers. For more information on Docker, see Docker.
Each SFM service provides images for the containers needed to run the service (in the form of Dockerfiles). These images are published to Docker Hub. GWU-created images are part of the GWUL organization and are prefixed with sfm-.
sfm-docker provides the necessary docker-compose.yml files to compose the services into a complete instance of SFM.
The following describes how to set up an instance of SFM that uses the latest release (and is suitable for a production deployment). See the development documentation for other SFM configurations.
SFM can be deployed without Docker. The various Dockerfiles should provide reasonable guidance on how to accomplish this.
Local installation¶
Installing locally requires Docker and Docker-Compose. See Installing Docker.
Either git clone the sfm-docker repository and copy the example configuration files:
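For example (a sketch; this assumes the repository is at github.com/gwu-libraries/sfm-docker, and 2.0.0 should be replaced with the correct version):

    git clone https://github.com/gwu-libraries/sfm-docker.git
    cd sfm-docker
    git checkout 2.0.0
    cp example.prod.docker-compose.yml docker-compose.yml
    cp example.env .env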
or just download example.prod.docker-compose.yml and example.env (replacing 2.0.0 with the correct version):
Update configuration in .env as described in Configuration.
Download containers and start SFM:
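For example:

    docker-compose pull
    docker-compose up -d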
It may take several minutes for the images to be downloaded and the containers to start. These images are large (roughly 12 GB), so make sure you have enough disk space; a high-speed connection is recommended.
It is also recommended that you scale up the Twitter REST Harvester container:
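For example (assuming the service is named twitterrestharvester in docker-compose.yml):

    docker-compose scale twitterrestharvester=2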
Notes:
Amazon EC2 installation¶
To launch an Amazon EC2 instance running SFM, follow the normal procedure for launching an instance. In Step 3: Configure Instance Details, under Advanced Details paste the following in user details and modify as appropriate as described in Configuration. Also, in the curl statements change master to the correct version, e.g., 2.0.0:
When the instance is launched, SFM will be installed and started.
Note the following:
If you need to make changes to docker-compose.yml, you can ssh into the EC2 instance and make them. docker-compose.yml and .env will be in the default user’s home directory.
Configuration¶
Configuration is documented in example.env. For a production deployment, pay particular attention to the following:
SFM_SITE_ADMIN_PASSWORD, RABBIT_MQ_PASSWORD, and POSTGRES_PASSWORD.
DATA_VOLUME and PROCESSING_VOLUME settings. Host volumes are recommended for production because they allow access to the data from outside of Docker.
SFM_HOSTNAME and SFM_PORT appropriately. These are the public hostname (e.g., sfm.gwu.edu) and port (e.g., 80) for SFM.
SFM_SMTP_HOST, SFM_EMAIL_USER, and SFM_EMAIL_PASSWORD. (If the configured email account is hosted by Google, you will need to configure the account to “Allow less secure apps.” Currently this setting is accessed, while logged in to the google account, via https://myaccount.google.com/security#connectedapps).
TWITTER_CONSUMER_KEY, TWITTER_CONSUMER_SECRET, WEIBO_API_KEY, WEIBO_API_SECRET, and/or TUMBLR_CONSUMER_KEY, TUMBLR_CONSUMER_SECRET. These are optional, but will make acquiring credentials easier for users. For more information and alternative approaches see API Credentials.
SFM_SITE_ADMIN_EMAIL. Problems with SFM are sent to this address.
SFM_CONTACT_EMAIL. Users are provided with this address.
SFM_INSTITUTION_NAME and SFM_INSTITUTION_LINK.
Note that if you make a change to configuration after SFM is brought up, you will need to restart containers. If the change only applies to a single container, then you can stop the container with docker kill <container name>. If the change applies to multiple containers (or you’re not sure), you can stop all containers with docker-compose stop. Containers can then be brought back up with docker-compose up -d and the configuration change will take effect.
HTTPS¶
To run SFM with HTTPS:
In docker-compose.yml, uncomment the nginx-proxy container and set the paths under volumes to point to your certificate and key.
In .env, change USE_HTTPS to True and SFM_PORT to 8080. Make sure that SFM_HOSTNAME matches your certificate.
Note:
Stopping¶
To stop the containers gracefully:
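For example:

    docker-compose stop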
SFM can then be restarted with docker-compose up -d.
Server restarts¶
If Docker is configured to automatically start when the server starts, then SFM will start. (This is enabled by default when Docker is installed.)
SFM will even be started if it was stopped prior to the server reboot. If you do not want SFM to start, then configure Docker to not automatically start.
To configure whether Docker automatically starts, see Stopping Docker from automatically starting.
Upgrading¶
Following are general instructions for upgrading SFM versions. Always consult the release notes of the new version to see if any additional steps are required.
Stop the containers gracefully:
This may take several minutes.
Make a copy of your existing docker-compose.yml and .env files:
Get the latest example.prod.docker-compose.yml. If you previously cloned the sfm-docker repository then:
otherwise, replacing 2.0.0 with the correct version:
If you customized your previous docker-compose.yml file, make the same changes in your new docker-compose.yml.
Make any changes in your .env file prescribed by the release notes.
Bring up the containers:
It may take several minutes for the images to be downloaded and the containers to start.
Deleting images from the previous version is recommended to prevent Docker from filling up too much space. Replacing 1.5.0 with the correct previous version:
You may also want to periodically clean up Docker (>= 1.13) with docker system prune.
Server sizing¶
While we have not performed any system engineering analysis of optimal server sizing for SFM, the following are different configurations that we use:
Monitoring¶
There are several mechanisms for monitoring (and troubleshooting) SFM.
For more information on troubleshooting, see Troubleshooting.
Monitor page¶
To reach the monitoring page, click “Monitor” on the header of any page in SFM UI.
The monitor page provides status and queue lengths for SFM components, including harvesters and exporters.
The status is based on the most recent status reported back by each harvester or exporter (within the last 3 days). A harvester or exporter reports its status when it begins a harvest or export. It also reports its status when it completes the harvest or export. Harvesters will also provide status updates periodically during a harvest.
Note that if there are multiple instances of a harvester or exporter (created with docker-compose scale), each instance will be listed.
The queue length lists the number of harvest or export requests that are waiting. A long queue length can indicate that additional harvesters or exporters are needed to handle the load (see Scaling up with Docker) or that there is a problem with the harvester or exporter.
The queue length for SFM UI is also listed. This is a queue of status update messages from harvesters or exporters. SFM UI uses these messages to update the records for harvests and exports. Any sort of a queue here indicates a problem.
Logs¶
It can be helpful to peek at the logs to get more detail on the work being performed by a harvester or exporter.
Docker logs¶
The logs for harvesters and exporters can be accessed using Docker’s log commands.
First, determine the name of the harvester or exporter using docker ps. In general, the name will be something like sfm_twitterrestharvester_1.
Second, get the log with docker logs <name>.
Add -f to follow the log. For example, docker logs -f sfm_twitterrestharvester_1.
Add --tail=<number of lines> to get the tail of the log. For example, docker logs --tail=100 sfm_twitterrestharvester_1.
Side note: To follow the logs of all services, use docker-compose logs -f.
Twitter Stream Harvester logs¶
Since the Twitter Stream Harvester runs multiple harvests on the same host, accessing its logs are a bit different.
First, determine the name of the Twitter Stream Harvester and the container id using docker ps. The name will probably be sfm_twitterstreamharvester_1 and the container id will be something like bffcae5d0603.
Second, determine the harvest id. This is available from the harvest’s detail page.
Third, get the stdout log with docker exec -t <name> cat /sfm-data/containers/<container id>/log/<harvest id>.out.log. To get the stderr log, substitute .err for .out.
To follow the log, use tail -f instead of cat. For example, docker exec -t sfm_twitterstreamharvester_1 tail -f /sfm-data/containers/bffcae5d0603/log/d4493eed5f4f49c6a1981c89cb5d525f.err.log.
RabbitMQ management console¶
The RabbitMQ Admin is usually available on port 15672. For example, http://localhost:15672/.
Administration¶
Designated users have access to SFM UI’s Django Admin interface by selecting Welcome > Admin on the top right of the screen. This interface will allow adding, deleting, or changing database records for SFM UI. Some of the most salient uses for this capability are given below.
Managing groups¶
To allow for multiple users to control a collection set:
Deactivating collections¶
Deactivating a collection indicates that you have completed collecting data for that collection. Deactivated collections will be removed from some of the lists in SFM UI and will not appear in the harvest status emails.
Collections can be deactivated using the “Deactivate” button on the collection detail page.
Note:
Sharing collections¶
Changing the visibility of a collection to “Other users” will allow the collection to be viewed by all SFM users.
The visibility of a collection can be changed by editing the collection.
Note:
A collection set is shared when it has a shared collection.
Shared collection sets will be listed on a separate tab of the collection set list page.
Deleting items¶
Records can be deleted using the Admin Interface. It is recommended to minimize deletion; in particular, collections should be turned off and seeds made inactive.
Note the following when deleting:
Moving collections¶
Collections can be moved from one collection set to another. This is done by changing the collection set for the collection in the Admin Interface.
Note the following when moving collections:
Allowing access to Admin Interface¶
To allow a user to have access to the Admin Interface, give the user Staff status or Superuser status. This is done from the user’s admin page.
Docker¶
This page contains information about Docker that is useful for installation, administration, and development.
Installing Docker¶
Docker Engine and Docker Compose
On OS X:
On Ubuntu:
If you have problems with the apt install, try the pip install.
You may need to add your user to the docker group in /etc/group.
While Docker is available on other platforms (e.g., Windows, Red Hat Enterprise Linux), the SFM team does not have any experience running SFM on those platforms.
Helpful commands¶
docker-compose up -d
docker-compose pull
docker-compose build
Build images for all of the containers specified in the docker-compose.yml file with the build field. Add --no-cache to build without using the cache.
docker ps
Add -a to also list stopped containers.
docker-compose kill
docker kill <container name>
docker-compose rm -v --force
docker rm -v <container name>
docker rm $(docker ps -a -q) -v
docker-compose logs
Add -f to follow the logs.
docker logs <container name>
Add -f to follow the logs.
docker-compose -f <docker-compose.yml filename> <command>
docker exec -it <container name> /bin/bash
docker rmi <image name>
docker rmi $(docker images -q)
docker-compose scale <service name>=<number of instances>
Scaling up with Docker¶
Most harvesters and exporters handle one request at a time; requests for exports and harvests queue up waiting to be handled. If requests are taking too long to be processed you can scale up (i.e., create additional instances of) the appropriate harvester or exporter.
To create multiple instances of a service, use docker-compose scale.
The harvester most likely to need scaling is the Twitter REST harvester since some harvests (e.g., broad Twitter searches) may take a long time. To scale up the Twitter REST harvester to 3 instances use:
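For example (assuming the service is named twitterrestharvester in docker-compose.yml):

    docker-compose scale twitterrestharvester=3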
To spread containers across multiple hosts, use Docker Swarm.
Using compose in production provides some additional guidance.
Stopping Docker from automatically starting¶
Docker automatically starts when the server starts. To control this:
Ubuntu 14 (Upstart)¶
Stop Docker from automatically starting:
Allow Docker to automatically start:
Manually start Docker:
Ubuntu 16 (Systemd)¶
Stop Docker from automatically starting:
Allow Docker to automatically start:
Manually start Docker:
Collection set / Collection portability¶
Overview¶
Collections and collection sets are portable. That means they can be moved to another SFM instance or to another environment, such as a repository. This can also be used to backup an SFM instance.
A collection includes all of the social media items (stored in WARCs) and the database records for the collection sets, collections, users, groups, credentials, seeds, harvests, and WARCs, as well as the history of collection sets, collections, credentials, and seeds. The database records are stored in JSON format in the records subdirectory of the collection. Each collection has a complete set of JSON database records to support loading it into a different SFM instance.
Here are the JSON database records for an example collection:
Thus, moving a collection set only requires moving/copying the collection set’s directory; moving a collection only requires moving/copying a collection’s directory. Collection sets are in /sfm-data/collection_set and are named by their collection set ids. Collections are subdirectories of their collection set and are named by their collection ids.
A README.txt is automatically created for each collection and collection set. Here is a README.txt for an example collection set:
Preparing to move a collection set / collection¶
Nothing needs to be done to prepare a collection set or collection for moving. The collection set and collection directories contain all of the files required to load it into a different SFM instance.
The JSON database records are refreshed from the database on a nightly basis. Alternatively, they can be refreshed using the serializecollectionset and serializecollection management commands:
Loading a collection set / collection¶
Move/copy the collection set/collection to /sfm-data/collection_set. Collection sets should be placed in this directory. Collections should be placed into a collection set directory.
Execute the deserializecollectionset or deserializecollection management command:
Note:
Moving an entire SFM instance¶
Stop the containers with docker-compose stop.
Copy the /sfm-data directory from the source server to the destination server.
Copy the /sfm-processing directory from the source server to the destination server.
Copy the docker-compose.yml and .env files from the source server to the destination server.
Make any necessary changes to the .env file, e.g., SFM_HOSTNAME.
Bring up the containers with docker-compose up -d.
If moving between AWS EC2 instances and /sfm-data is on a separate EBS volume, the volume can be detached from the source EC2 instance and attached to the destination EC2 instance.
Storage volumes¶
SFM stores data on 2 volumes:
Volume types¶
There are 2 types of volumes:
The type of volume is specified in the .env file. When selecting a link to a host location, the path on the host environment must be specified:
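For example, a sketch of what this can look like in .env (the host paths are placeholders; see example.env for the exact syntax used by your version):

    DATA_VOLUME=/src/data:/sfm-data
    PROCESSING_VOLUME=/src/processing:/sfm-processing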
We recommend that you use an internal volume only for development; for other uses linking to a host location is recommended. This makes it easier to place the data on specific storage devices (e.g., NFS or EBS) and to back up the data.
File ownership¶
SFM files are owned by the sfm user (default uid 990) in the sfm group (default gid 990). If you use a link to a host location and list the files, the uid and gid may be listed instead of the user and group names.
If you shell into a Docker container, you will be the root user. Make sure that any operations you perform will not leave behind files that do not have appropriate permissions for the sfm user.
Note that when using Docker for Mac and linking to a host location, the file ownership may not appear as expected.
Directory structure of sfm-data¶
The following is an outline of the structure of sfm-data:
Space warnings¶
SFM will monitor free space on sfm-data and sfm-processing. Administrators will be notified when the amount of free space crosses a configurable threshold. The threshold is set in the .env file:
Moving from a Docker internal volume to a linked volume¶
These instructions are for Ubuntu. They may need to be adjusted for other operating systems. A consolidated sketch of the commands is shown after the steps below.
Stop docker containers:
Copy sfm-data contents from inside the container to a linked volume:
Set ownership:
Change .env:
Restart containers:
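Taken together, a sketch of these steps (the data container name, host path, and uid/gid are assumptions; adjust for your deployment):

    docker-compose stop
    # copy the data out of the Docker internal volume to a host location
    docker cp sfm_data_1:/sfm-data/. /src/data
    # give ownership to the sfm user and group (default uid/gid 990)
    sudo chown -R 990:990 /src/data
    # edit .env so that DATA_VOLUME points to /src/data (see Volume types above), then:
    docker-compose up -d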
Limitations and Known Issues¶
To make sure you have the best possible experience with SFM, you should be aware of the limitations and known issues:
For a complete list of tickets, see https://github.com/gwu-libraries/sfm-ui/issues
In addition, you should be aware of the following:
Troubleshooting¶
General tips¶
Check that all containers are running with docker ps.
Check the logs with docker-compose logs and docker logs <container name>.
Check the configuration in .env.
Specific problems¶
Skipped harvests¶
A new harvest will not be requested if the previous harvest has not completed. Instead, a harvest record will be created with the status of skipped. Some of the reasons that this might happen include:
After correcting the problem to resume harvesting for a collection, void the last (non-skipped) harvest. To void a harvest, go to that harvest’s detail page and click the void button.
Connection errors when harvesting¶
If harvests from a container fail with something like:
then stop and restart the container. For example:
Bind error¶
If when bringing up the containers you receive something like:
it means another application is already using a port configured for SFM. Either shut down the other application or choose a different port for SFM. (Chances are the other application is Apache.)
Bad Request (400)¶
If you receive a Bad Request (400) when trying to access SFM, your SFM_HOST environment variable is not configured correctly. For more information, see ALLOWED_HOSTS.
Social Network Login Failure for Twitter¶
If you receive a Social Network Login Failure when trying to connect a Twitter account, make sure that the Twitter app from which you got the Twitter credentials is configured with a callback URL. The URL should be http://<SFM hostname>/accounts/twitter/login/callback/.
If you have made a change to the credentials configured in .env, try deleting twitter from Social Applications in the admin interface and restarting SFM UI (docker-compose stop ui then docker-compose up -d).
Docker problems¶
If you are having problems bringing up the Docker containers (e.g., driver failed programming external connectivity on endpoint), restart the Docker service. On Ubuntu, this can be done with:
CSV export problems¶
Excel for Mac has problems with unicode characters in CSV files. As a work-around, export to Excel (XLSX) format.
Still stuck?¶
Contact the SFM team. We’re happy to help.
Development¶
Setting up a development environment¶
SFM is composed of a number of components. Development can be performed on each of the components separately.
For SFM development, it is recommended to run components within a Docker environment (instead of directly in your OS, without Docker).
Step 1: Install Docker and Docker Compose¶
See Installing Docker.
Step 2: Clone sfm-docker and create copies of docker-compose files¶
For example:
For the purposes of development, you can make changes to docker-compose.yml and .env. This will be described more below.
Step 3: Clone the component repos¶
For example:
Repeat for each of the components that you will be working on. Each of these should be in a sibling directory of sfm-docker.
Running SFM for development¶
To bring up an instance of SFM for development, change to the sfm-docker directory and execute:
You may not want to run all of the containers. To omit a container, simply comment it out in docker-compose.yml.
By default, the code that has been committed to master for each of the containers will be executed. To execute your local code (i.e., the code you are editing), you will want to link in your local code. To link in the local code for a container, uncomment the volume definition that points to your local code. For example:
sfm-utils and warcprox are dependencies of many components. By default, the code that has been committed to master for sfm-utils or warcprox will be used for a component. To use your local code as a dependency, you will want to link in your local code. Assuming that you have cloned sfm-utils and warcprox, to link in the local code as a dependency for a container, change SFM_REQS in .env to “dev” and comment the volume definition that points to your local code. For example:
Note:
As a Django application, SFM UI will automatically detect code changes and reload. Other components must be killed and brought back up to reflect code changes.
Running tests¶
Unit tests¶
Some components require a test_config.py file that contains credentials. For example, sfm-twitter-harvester requires a test_config.py containing:
Note that if this file is not present, unit tests that require it will be skipped. Each component’s README will describe the test_config.py requirements.
Unit tests for most components can be run with:
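For example, from the component’s directory this is typically:

    python -m unittest discover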
The notable exception is SFM UI, which can be run with:
Integration tests¶
Many components have integration tests, which are run inside docker containers. These components have a ci.docker-compose.yml file which can be used to bring up a minimal environment for running the tests.
As described above, some components require a test_config.py file.
To run integration tests, bring up SFM:
Run the tests:
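A sketch of what this looks like (the container name is a placeholder; see the note below):

    docker exec -it <container name> python -m unittest discover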
You will need to substitute the correct name of the container. (docker ps will list the containers.)
For reference, see each component’s .travis.yml file which shows the steps of running the integration tests.
Smoke tests¶
sfm-docker contains some smoke tests which will verify that a development instance of SFM is running correctly.
To run the smoke tests, first bring up SFM:
wait, and then run the tests:
Note that the smoke tests are not yet complete and require test fixtures that are only available in a development deploy.
For reference, the continuous integration deploy instructions show the steps of running the smoke tests.
Requirements files¶
This will vary depending on whether a project has warcprox and sfm-utils as dependencies, but in general:
requirements/common.txt contains dependencies, except warcprox and sfm-utils.
requirements/release.txt references the last released version of warcprox and sfm-utils.
requirements/master.txt references the master version of warcprox and sfm-utils.
requirements/dev.txt references local versions of warcprox and sfm-utils in development mode.
To get a complete set of dependencies, you will need common.txt and either release.txt, master.txt or dev.txt. For example:
Development tips¶
Admin user accounts¶
Each component should automatically create any necessary admin accounts (e.g., a django admin for SFM UI). Check .env for the username/passwords for those accounts.
RabbitMQ management console¶
The RabbitMQ management console can be used to monitor the exchange of messages. In particular, use it to monitor the messages that a component sends, create a new queue, bind that queue to sfm_exchange using an appropriate routing key, and then retrieve messages from the queue.
The RabbitMQ management console can also be used to send messages to the exchange so that they can be consumed by a component. (The exchange used by SFM is named sfm_exchange.)
For more information on the RabbitMQ management console, see RabbitMQ.
Blocked ports¶
When running on a remote VM, some ports (e.g., 15672 used by the RabbitMQ management console) may be blocked. SSH port forwarding can help make those ports available.
Django logs¶
Django logs for SFM UI are written to the Apache logs. In the docker environment, the level of various loggers can be set from environment variables. For example, setting SFM_APSCHEDULER_LOG to DEBUG in the docker-compose.yml will turn on debug logging for the apscheduler logger. The logger for the SFM UI application is called ui and is controlled by the SFM_UI_LOG environment variable.
Apache logs¶
In the SFM UI container, Apache logs are sent to stdout/stderr which means they can be viewed with docker-compose logs or docker logs <container name or id>.
Initial data¶
The development and master docker images for SFM UI contain some initial data. This includes a user (“testuser”, with password “password”). For the latest initial data, see fixtures.json. For more information on fixtures, see the Django docs.
Runserver¶
There are two flavors of the development docker image for SFM UI. gwul/sfm-ui:master runs SFM UI with Apache, just as it will in production. gwul/sfm-ui:master-runserver runs SFM UI with runserver, which dynamically reloads changed Python code. To switch between them, change UI_TAG in .env.
Note that as a byproduct of how runserver dynamically reloads Python code, there are actually 2 instances of the application running. This may produce some odd results, like 2 schedulers running. This will not occur with Apache.
Job schedule intervals¶
To assist with testing and development, a 5 minute interval can be added by setting SFM_FIVE_MINUTE_SCHEDULE to True in the docker-compose.yml.
Connecting to the database¶
To connect to postgres using psql:
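A sketch of one way to do this (the container name, user, and database name are assumptions; check your docker-compose.yml and .env):

    docker exec -it sfm_db_1 psql -h localhost -U postgres sfmdatabase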
You will be prompted for the password, which you can find in .env.
Docker tips¶
Building vs. pulling¶
Containers are created from images. Images are either built locally or pre-built and pulled from Docker Hub. In both cases, images are created based on the docker build (i.e., the Dockerfile and other files in the same directory as the Dockerfile).
In a docker-compose.yml, pulled images will be identified by the image field, e.g., image: gwul/sfm-ui:master. Built images will be identified by the build field, e.g., build: app-dev.
In general, you will want to use pulled images. These are automatically built when changes are made to the Github repos. You should periodically execute docker-compose pull to make sure you have the latest images.
You may want to build your own image if your development requires a change to the docker build (e.g., you modify fixtures.json).
Killing, removing, and building in development¶
Killing a container will cause the process in the container to be stopped. Running the container again will cause the process to be restarted. Generally, you will kill and run a development container to get the process to run with changes you’ve made to the code.
Removing a container will delete all of the container’s data. During development, you will remove a container to make sure you are working with a clean container.
Building a container creates a new image based on the Dockerfile. For a development image, you only need to build when making changes to the docker build.
Writing a harvester¶
Requirements¶
Suggestions¶
Notes¶
Messaging¶
RabbitMQ¶
RabbitMQ is used as a message broker.
The RabbitMQ management console is exposed at http://<your docker host>:15672/. The username is sfm_user. The password is the value of RABBITMQ_DEFAULT_PASS in secrets.env.
Publishers/consumers¶
The hostname for RabbitMQ is mq and the port is 5672.
Publishers/consumers should wait for RabbitMQ to be available before connecting; the Docker service is named rabbit. See appdeps.py for docker application dependency support.
Exchange¶
sfm_exchange is a durable topic exchange to be used for all messages. All publishers/consumers must declare it:
Queues¶
All queues must be declared durable:
Messaging Specification¶
Introduction¶
SFM is architected as a number of components that exchange messages via a messaging queue. To implement functionality, these components send and receive messages and perform certain actions. The purpose of this document is to describe this interaction between the components (called a “flow”) and to specify the messages that they will exchange.
Note that as additional functionality is added to SFM, additional flows and messages will be added to this document.
General¶
Harvesting social media content¶
Harvesting is the process of retrieving social media content from the APIs of social media services and writing to WARC files.
Background information¶
Flow¶
The following is the flow for a harvester performing a REST harvest and creating a single warc:
The following is the message flow for a harvester performing a stream harvest and creating multiple warcs:
Messages¶
Harvest start message¶
Harvest start messages specify for a harvester the details of a harvest. Example:
Another example:
Harvest stop message¶
Harvest stop messages tell a harvester performing a stream harvest to stop. Example:
Harvest status message¶
Harvest status messages allow a harvester to provide information on the harvests it performs. Example:
Warc created message¶
Warc created messages allow a harvester to provide information on the warcs that are created during a harvest. Example:
Exporting social media content¶
Exporting is the process of extracting social media content from WARCs and writing it to export files. The exported content may be a subset or derivative of the original content. A number of different export formats will be supported.
Background information¶
Flow¶
The following is the flow for an export:
Export start message¶
Export start messages specify the requests for an export. Example:
Another example:
Export status message¶
Export status messages allow an exporter to provide information on the exports it performs. Example: