Twitter Search API

In Progress | Posted Feb 18, 2012 | Paid on delivery


I am seeking a fairly simple Twitter Search API that performs the following functions.

Per [url removed, login to view], using Twitter Developer best practices.

Overview: Develop a web-based interface to query Twitter for subject matter of interest, returning the results to the user either once or continuously, and saving the results for future analysis.

Functionality: The user visits a website hosted on my server and is shown a clean, simple user interface (UI). The UI has fields to search Twitter posts by ALL search parameters allowed by the Twitter API; the default location is US. The user fills in the query fields (keywords, location, etc.), using pull-down menus where appropriate, then clicks a Search button, which launches the API. The program queries Twitter and returns the search results, including the tweets and all associated metadata (Twitter user name, links, profile picture, etc.), on the browser screen below the search query input area of the page. The format should be one full tweet per row, most recent on top, with older tweets continuing below. Limit the tweets shown on screen to 50 at a time, and let the user decide to "show all".
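Below is a minimal sketch of the search step in Python. It assumes the v1 Twitter Search API that was current in 2012; the endpoint URL and the rpp/geocode parameter names come from that API rather than from this spec, and requests is an assumed dependency.

    # Hypothetical sketch: query the v1 Twitter Search API (assumed endpoint).
    import requests

    SEARCH_URL = "http://search.twitter.com/search.json"

    def search_tweets(keywords, geocode=None, per_page=50):
        """Return a list of tweet dicts with their metadata
        (from_user, text, profile_image_url, etc.), newest first."""
        params = {"q": keywords, "rpp": per_page}  # rpp: results per page
        if geocode:
            params["geocode"] = geocode  # e.g. "42.36,-71.06,25mi"
        resp = requests.get(SEARCH_URL, params=params, timeout=10)
        resp.raise_for_status()
        return resp.json().get("results", [])

The UI would call this once per Search click, render the first 50 tweets, and keep the remainder behind the "show all" control.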

The user is then given five options: 1. Download the search results in CSV format to the local PC, 2. Cache the file on the server ONE TIME, 3. Cache the file CONTINUOUSLY, 4. Modify the Query, or 5. Clear and Start Over (see below for definitions).

1. Download the search results in CSV format to the local PC: a one-click download of all the results (not just the first 50) in CSV format. Name the file using the search terms (boston+friday+US+[url removed, login to view], for example), or use whatever separator is friendly to Linux and PCs; your choice. (See the filename/CSV sketch after this list.)

2. Cache the file on the server one time: the query results are cached on my server in CSV format, in a Log directory, in a file named dynamically from the search query ([url removed, login to view], as above). If this is selected, then after caching, indicate to the user 'Successfully Cached to the Server as "file name".csv' so the user knows the operation was a success. (See the caching sketch after this list.)

3. Cache the file continuously: when this option is selected, a pull-down menu offers Rate Limit options of 1/24 hours, 1/hr, or 10/hr, meaning run the query again and APPEND the new results to the cached log file once every 24 hours, once every hour, or ten times an hour, respectively. A second option must be selected by the user: an End Date in the format 01/01/2012. The cache will stop running the query at 24:00 hours (the end of the day) GMT -5 (New York, NY Eastern Time). You will need to cache the query on my server. Indicate to the user 'Successfully Cached Continuously to the Server as "file name".csv' so the user knows the operation was a success. (See the scheduler sketch after this list.)

4. Modify the Query: clears the search results but leaves the user-entered fields in place.

5. Clear and Start Over: clears the search results and the user-entered fields.
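For options 1 and 2 above, here is a sketch of the filename convention and the CSV serialization. The column list is illustrative (the real file would carry all tweet metadata), and "+" is the separator suggested by the example above:

    import csv
    import io
    import re

    # Illustrative columns; extend with whatever metadata the search returns.
    FIELDS = ["created_at", "from_user", "text", "profile_image_url"]

    def csv_filename(terms):
        """Join the search terms with '+' (friendly to Linux and PCs),
        stripping characters a filesystem might reject."""
        safe = (re.sub(r"[^A-Za-z0-9_-]", "", t) for t in terms)
        return "+".join(s for s in safe if s) + ".csv"

    def tweets_to_csv(tweets):
        """Serialize ALL result rows (not just the first 50) to CSV text."""
        buf = io.StringIO()
        writer = csv.DictWriter(buf, fieldnames=FIELDS, extrasaction="ignore")
        writer.writeheader()
        for tweet in tweets:
            writer.writerow(tweet)
        return buf.getvalue()

For example, csv_filename(["boston", "friday", "US"]) yields "boston+friday+US.csv".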
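The one-time server cache (option 2) is then a single write plus the confirmation message; the Log directory path below is an assumption about the server layout:

    import os

    LOG_DIR = "/home/user/logs"  # assumed location of the Log directory

    def cache_once(terms, tweets):
        """Write the results to the Log directory and return the
        confirmation string shown to the user."""
        name = csv_filename(terms)
        with open(os.path.join(LOG_DIR, name), "w", newline="") as f:
            f.write(tweets_to_csv(tweets))
        return 'Successfully Cached to the Server as "%s"' % name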
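For option 3, a sketch of the continuous-cache loop. The table maps the three rate options to intervals in seconds, and the end date is treated as 24:00 at a fixed GMT -5 offset as the spec states; a production version on the HostGator box would more likely be a cron entry with real time-zone handling.

    import time
    from datetime import datetime, timedelta, timezone

    INTERVALS = {"1/24 hours": 86400, "1/hr": 3600, "10/hr": 360}  # seconds
    EASTERN = timezone(timedelta(hours=-5))  # fixed GMT -5 per the spec

    def run_continuous_cache(rate, end_date, fetch, append):
        """Re-run the query and APPEND the new results to the log file
        until end_date at 24:00 Eastern. `fetch` and `append` stand in
        for search_tweets() and a CSV append; `end_date` is 'MM/DD/YYYY'."""
        stop = datetime.strptime(end_date, "%m/%d/%Y").replace(tzinfo=EASTERN)
        stop += timedelta(days=1)  # 24:00 = start of the following day
        while datetime.now(EASTERN) < stop:
            append(fetch())
            time.sleep(INTERVALS[rate])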

At the bottom of the page, the user should be able to see a list of the cached log files set to run continuously on the server. Clicking a file name should download the current cached log file to the user's PC, as sketched below.
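A sketch of that listing, assuming the Log directory is published under a /logs/ URL path (an assumption about how the server exposes it):

    import os

    def cached_file_links(log_dir="/home/user/logs"):
        """Return (filename, download URL) pairs for every cached CSV,
        newest first, for rendering at the bottom of the page."""
        names = [n for n in os.listdir(log_dir) if n.endswith(".csv")]
        names.sort(key=lambda n: os.path.getmtime(os.path.join(log_dir, n)),
                   reverse=True)
        return [(n, "/logs/" + n) for n in names]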

I will provide root access to a HostGator Linux hosting account, and you will need to configure the Linux server, programs, and databases to develop and run the program as needed. Provide a [url removed, login to view] file that explains the program and its functionality. The readme file, plus all documentation, programs, and source code, will be installed and left on my server.

That's it! By the way, I am looking for the same thing to search Google+, Facebook, LinkedIn, and Google (search). If you have those skills, let me know and I will post another project and give you a private invitation to bid on each of those.

Thanks for looking!

Blog, Facebook Marketing, LinkedIn, Linux, Odd Jobs, SEO, Social Networking, Twitter, Website Management, Moodle

Project ID: #2298239

About the Project

Remote project | Active Jul 11, 2012