I have a data set and a set of relevant questions.
I would like help from someone today to complete my assignment via TeamViewer or WebEx.
I have completed a basic Cloudera setup, and I have the dataset.
I need some help with Apache Kafka: use Apache Flume to consume messages, and use Spark Streaming to consume messages.
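For the Flume side, a minimal agent sketch that reads from a Kafka topic and lands events in HDFS; the agent name, topic, broker address, and paths are placeholders you would adapt to the Cloudera setup:

```
# Hypothetical Flume agent: agent1, input-topic, and paths are placeholders
agent1.sources = kafka-source
agent1.channels = mem-channel
agent1.sinks = hdfs-sink

# Kafka source: pull messages from the topic
agent1.sources.kafka-source.type = org.apache.flume.source.kafka.KafkaSource
agent1.sources.kafka-source.kafka.bootstrap.servers = localhost:9092
agent1.sources.kafka-source.kafka.topics = input-topic
agent1.sources.kafka-source.channels = mem-channel

# In-memory channel buffering events between source and sink
agent1.channels.mem-channel.type = memory
agent1.channels.mem-channel.capacity = 10000

# HDFS sink: write raw events out as plain files
agent1.sinks.hdfs-sink.type = hdfs
agent1.sinks.hdfs-sink.channel = mem-channel
agent1.sinks.hdfs-sink.hdfs.path = /user/cloudera/flume/events
agent1.sinks.hdfs-sink.hdfs.fileType = DataStream
```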
Append a UUID and a timestamp using Pig Latin.
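A Pig Latin sketch of that step. Pig has no built-in UUID generator, so this assumes a small Java UDF registered as `myudfs.GenUUID` (a placeholder name); the input path and schema are also placeholders:

```
-- Hypothetical paths, schema, and UDF; ToUnixTime/CurrentTime are Pig built-ins
REGISTER myudfs.jar;
raw = LOAD '/user/cloudera/data/raw.csv' USING PigStorage(',')
      AS (f1:chararray, f2:chararray);
stamped = FOREACH raw GENERATE
      myudfs.GenUUID() AS id,
      ToUnixTime(CurrentTime()) AS ts,
      f1, f2;
STORE stamped INTO '/user/cloudera/data/stamped' USING PigStorage(',');
```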
Cleanse the data (trim whitespace, handle nulls, remove duplicates) and load it in Parquet format, as modelled, using Spark/Scala.
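The cleanse-and-write step could look like this in Spark/Scala; the input/output paths are placeholders, and the trim is applied to every column on the assumption they are read as strings from CSV:

```scala
// Sketch: trim, drop all-null rows, dedupe, write Parquet. Paths are placeholders.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, trim}

object CleanseJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("cleanse").getOrCreate()

    val raw = spark.read.option("header", "true").csv("/user/cloudera/data/stamped")

    val cleansed = raw
      .select(raw.columns.map(c => trim(col(c)).as(c)): _*) // trim every column
      .na.drop("all")                                       // drop fully-null rows
      .dropDuplicates()                                     // remove duplicate rows

    cleansed.write.mode("overwrite").parquet("/user/cloudera/data/cleansed")
  }
}
```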
• Model the data in Elasticsearch (index, type, partition)
• Consume the data from the Kafka topic and load it into the speed layer (Elasticsearch) using Spark Streaming
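The Kafka-to-Elasticsearch step can be sketched with Spark Structured Streaming and the elasticsearch-spark (es-hadoop) connector; the broker address, topic, checkpoint path, and `myindex/mytype` target are placeholders:

```scala
// Sketch: stream from Kafka into Elasticsearch via the es-hadoop connector.
// Requires the spark-sql-kafka and elasticsearch-spark packages on the classpath.
import org.apache.spark.sql.SparkSession

object KafkaToEs {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("kafka-to-es").getOrCreate()

    val stream = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092")
      .option("subscribe", "input-topic")
      .load()
      .selectExpr("CAST(value AS STRING) AS json") // Kafka values arrive as bytes

    stream.writeStream
      .format("es") // sink provided by elasticsearch-spark
      .option("checkpointLocation", "/tmp/es-checkpoint")
      .option("es.nodes", "localhost:9200")
      .start("myindex/mytype") // placeholder index/type
      .awaitTermination()
  }
}
```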
Read data from the Elasticsearch index and answer the relevant questions.
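For the final step, a sketch of reading the index back into a DataFrame and answering questions with Spark SQL; the node address, index/type, and the sample query are placeholders:

```scala
// Sketch: load an Elasticsearch index into Spark SQL and query it.
// Requires the elasticsearch-spark connector on the classpath.
import org.apache.spark.sql.SparkSession

object QueryEs {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("query-es").getOrCreate()

    val df = spark.read
      .format("org.elasticsearch.spark.sql")
      .option("es.nodes", "localhost:9200")
      .load("myindex/mytype") // placeholder index/type

    df.createOrReplaceTempView("events")
    // Placeholder query: replace with the assignment's actual questions
    spark.sql("SELECT COUNT(*) FROM events").show()
  }
}
```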
Hello,
I am a big data expert. I have gone through the project description, understand the requirements, and can deliver the desired outcome.
Relevant Skills and Experience
Big Data Hadoop, Elasticsearch, Hive, HBase, Pig
Proposed Milestones
₹1300 INR - Big data expert