PySpark Experts for Hire in Australia
Showing 3 results
Sponsored Freelancers
-
RDBMS | Big Data | Data Engineering | Data Modelling | AWS | Python | Unix | Hive | Hadoop | Talend | Spark | Bigquery | Dataform
Hire navneethshetty
-
I have 1 year of experience as a data engineer, where I worked on the development and maintenance of relational and non-relational databases, including MySQL and MongoDB. Additionally, I have experience creating ETL workflows to automate the loading and processing of large volumes of data. During my work experience, I also collaborated with the data analysis team to develop custom reports and visualizations. I implemented security solutions to protect sensitive data and worked with the data science team to develop predictive models based on large volumes of data. I also developed custom visualizations and reports using tools like Tableau and Power BI. In addition to my work experience, I have a strong academic background in mathematics (MATLAB), which has given me a deep understanding of statistics and modeling techniques. As a result, I am able to apply my knowledge of mathematics and data analysis to help companies make informed decisions based on accurate and reliable data.
Hire Science21
-
As an ML Data Engineer, I design and implement scalable data pipelines, transforming raw data into valuable insights. I leverage cloud technologies like AWS, specializing in optimizing PySpark code for efficient data processing. With expertise in database administration (Oracle, Redshift), I excel in ETL and batch data processing, ensuring seamless migration to cloud environments. My role extends to deploying ETL pipelines, optimizing SQL queries, and managing production support for cloud applications. I hold certifications in AWS and Azure, with a track record of delivering projects under pressure. My work combines technical skills with a strategic approach, driving impactful outcomes in data engineering.
Hire Akshat10023