Algorithm in hadoop and spark (2 versions)
$30-250 USD
Paid on delivery
Hi!
Looking for a developer to help me port an algorithm (details via email) to Hadoop and to Spark. Roughly, the idea is to compute a formula similar to the Pearson correlation (formula attached as an image).
Payment is open for discussion.
Some clarification:
- one CSV file with the structure item_id, item_name, item_group
- a second file: user_id, item_id, rating

The algorithm, for an upfront-stated user_id (user_0):
1. partition the input from the second file by item_group
2. for each group, calculate the Pearson correlation between user_0 and each of the remaining users (considering only ratings from that specific group)
3. produce the top N most correlated users (N can be a constant in the program)
4. from that top-N list, calculate the average rating of each item and return the top M items (M can be a constant as well)

Output: the top M items for each group
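The steps above can be sketched in plain Python. This is a minimal single-machine sketch of the logic, not the requested Hadoop/Spark port; the function names, the restriction to co-rated items when correlating, and the tie-breaking order are my assumptions, and the exact formula in the attached image may differ from standard Pearson:

```python
import math
from collections import defaultdict

def pearson(xs, ys):
    # Standard Pearson correlation over two equal-length rating vectors;
    # returns 0.0 for empty or constant vectors.
    n = len(xs)
    if n == 0:
        return 0.0
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    if sx == 0 or sy == 0:
        return 0.0
    return cov / (sx * sy)

def top_items_per_group(ratings, item_groups, user_0, N, M):
    # ratings: iterable of (user_id, item_id, rating) rows (the second CSV);
    # item_groups: item_id -> item_group mapping (from the first CSV).
    by_group = defaultdict(list)                      # step 1: partition by group
    for user, item, r in ratings:
        by_group[item_groups[item]].append((user, item, r))

    result = {}
    for group, rows in by_group.items():
        profiles = defaultdict(dict)                  # user -> {item: rating}
        for user, item, r in rows:
            profiles[user][item] = r
        base = profiles.get(user_0, {})

        scores = []                                   # step 2: correlate with user_0
        for user, prof in profiles.items():
            if user == user_0:
                continue
            common = set(base) & set(prof)            # co-rated items only (assumption)
            xs = [base[i] for i in common]
            ys = [prof[i] for i in common]
            scores.append((pearson(xs, ys), user))

        top_users = {u for _, u in sorted(scores, reverse=True)[:N]}  # step 3

        sums, counts = defaultdict(float), defaultdict(int)           # step 4
        for user, item, r in rows:
            if user in top_users:
                sums[item] += r
                counts[item] += 1
        avgs = sorted(((sums[i] / counts[i], i) for i in sums), reverse=True)
        result[group] = [item for _, item in avgs[:M]]
    return result

# Tiny illustrative dataset (hypothetical IDs):
groups = {"i1": "g", "i2": "g", "i3": "g"}
ratings = [
    ("user_0", "i1", 5), ("user_0", "i2", 3),
    ("u1", "i1", 5), ("u1", "i2", 3), ("u1", "i3", 4),
    ("u2", "i1", 1), ("u2", "i2", 5), ("u2", "i3", 1),
]
res = top_items_per_group(ratings, groups, "user_0", N=1, M=2)
# res → {"g": ["i1", "i3"]}: u1 is the most correlated user, and averaging
# over the top-N list ranks i1 (avg 5) above i3 (avg 4)
```

In a Spark port, steps 1-2 would naturally become a `groupBy` on item_group followed by a per-group correlation against the broadcast ratings of user_0.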
Project ID: #22190052
About the project
Awarded to:
This is Mohanraj V. I have completed a Master of Computer Applications. I have 6 years of IT experience in data science and Hadoop development, including Python, HDFS, Hive, PySpark, and HBase, and I am experienced in SQL and PL/SQ More
7 freelancers are bidding an average of $198 for this project
Hi, I am a big data developer and a module lead at a reputed MNC. I have been in the IT industry for more than 12 years. I have tons of experience developing projects using Java, Apache Spark, Hive, Kafka, Sqoop, Pig, Scala, AWS More
Hi, I am experienced in Java, Hadoop, Spark, etc. I can start right now, but I have a few doubts and questions. Let's have a quick chat and get it started. Waiting for your reply.
Expertise in all components of the Hadoop ecosystem: Hive, Hue, Pig, Sqoop, HBase, Flume, ZooKeeper, Oozie, Apache Flink, and Apache Spark. Responsible for writing MapReduce programs in Java. Logical implementation More
I am a lead data engineer and develop complex Spark applications on Hadoop on a daily basis. Once I have the full requirements, I will deliver within 3 days.