DBS Data & Machine Learning Engineering Virtual Hiring Event 2021 Hyderabad

DBS Data & Machine Learning Engineering Virtual Hiring Event 2021 Hyderabad: At DBS, we see ourselves as a 29,000-person start-up that leverages innovation, embraces an Agile culture, and adopts the latest technology to design and develop superior solutions.

We seek to identify top Data & Machine Learning Engineering talent to join us in powering the next stage of DBS’ digital transformation and reimagining banking. We place our customers at the heart of everything we do and are committed to offering differentiated and exceptional services empowered by technology. With a strong culture of innovation, we experiment with new technology and collaborate with the FinTech community to simplify banking and help others Live More, Bank Less.

We are looking for Data & Machine Learning Engineering talent across various levels. Join us as we continue to drive digital transformation to reimagine banking.

Based in Hyderabad, you will leverage new technology to create solutions that provide a superior client experience and are truly scalable.

Participate in the DBS Data & Machine Learning Engineering Virtual Hiring Event 2021 for an opportunity to be part of a culture that fosters a truly agile mode of continuous experimentation and delivery, while making the impossible possible.

Do you have?

  • A passion for technology and a continuous-learning mindset
  • Willingness to adapt, learn and collaborate
  • Strong communication and problem-solving skills; the ability to help clients and teammates make good decisions
  • An ability to be self-organized and see the big picture
  • Ability to understand complex business processes and propose self-learning solutions
  • A passion for automation where appropriate and beneficial

Are you familiar with

  • Data Engineering
  • Machine Learning Engineering
  • Site Reliability Engineering
  • Data Solution Architecture

Data Engineer

  • Understand non-functional requirements and turn them into functional/technical requirements, then apply architectural principles and leverage the Spark framework to reproduce specific use cases and/or feature inputs for models
  • A passion for automation where appropriate and beneficial
  • Understand data acquisition, develop data set processes, and have knowledge of data concepts
  • Understand complex transformation logic and translate it into Spark code or Spark-SQL queries (see the sketch after this list)
  • Hadoop ecosystem (Spark, Hive, HBase, YARN and Kafka), Spark Core, Spark-SQL, and live streaming datasets using Kafka/Spark Streaming
  • Unix shell scripting and knowledge of Apache Airflow or any job scheduler
  • Cloud computing technologies like AWS or GCP
  • Cloudera distribution framework and Jenkins (or similar CI/version-control tooling)
  • Different file formats like Avro, ORC, Parquet and JSON
  • Excellent understanding of technology life cycles and the concepts and practices required to build big data solutions
  • Data warehouse, data model and feature mart concepts
  • Core Java with experience in microservices architecture
  • Ability to understand and build reusable data assets or features that downstream data science models can read and use
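
To give a flavour of the Spark-SQL work described above (see the bullet on transformation logic), here is a minimal, hypothetical PySpark sketch that expresses the same simple aggregation with the DataFrame API and as a Spark-SQL query. The dataset path, table name and columns (transactions, customer_id, amount) are illustrative assumptions only, not part of the role description.

```python
# Minimal, hypothetical PySpark sketch: one aggregation written both as
# DataFrame code and as a Spark-SQL query. Paths and column names are assumed.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("transformation-sketch").getOrCreate()

# Assumed input: a Parquet dataset of transactions with customer_id and amount.
transactions = spark.read.parquet("/data/transactions")  # hypothetical path

# DataFrame API version of the transformation.
totals_df = (
    transactions
    .groupBy("customer_id")
    .agg(F.sum("amount").alias("total_amount"))
)

# Equivalent Spark-SQL version of the same logic.
transactions.createOrReplaceTempView("transactions")
totals_sql = spark.sql(
    "SELECT customer_id, SUM(amount) AS total_amount "
    "FROM transactions GROUP BY customer_id"
)

# Persist as a reusable, feature-style output (e.g. ORC, as listed above).
totals_df.write.mode("overwrite").orc("/data/features/customer_totals")
```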

Machine Learning Engineer

  • Ability to understand complex business processes and propose self-learning and predictive ML models
  • Ability to solve complex problems with multi-layered (deep) data sets
  • Extensive Analytical modeling skills (decision tree, nearest neighbor, neural net, support vector machine, ensemble of multiple models, etc.)
  • Experience in building self-learning models
  • Machine learning frameworks such as TensorFlow or Keras (a minimal example follows this list)
  • Knowledge of Hadoop, Spark or any other distributed computing systems
  • Advanced math skills (linear algebra, Bayesian statistics, group theory)
  • Knowledge of Machine Learning Algorithms and Libraries
  • Experience working with a version control system
  • Understanding of distributed ecosystems
  • Spark Core, Spark-SQL, Scala programming and streaming datasets on a big data platform
  • Programming experience in Python, R, Scala or Java
  • Excellent understanding of technology life cycles and the concepts and practices required to build big data solutions
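
As a small illustration of the frameworks named above (see the TensorFlow/Keras bullet), here is a minimal, hypothetical Keras sketch of a binary classifier trained on synthetic data; the feature width, toy target and train/test split are placeholders chosen purely for the example.

```python
# Minimal, hypothetical Keras sketch: a small binary classifier on
# synthetic data. Feature width and labels are placeholders only.
import numpy as np
import tensorflow as tf

# Synthetic stand-in data: 1,000 rows of 20 numeric features.
X = np.random.rand(1000, 20).astype("float32")
y = (X.sum(axis=1) > 10.0).astype("float32")  # toy target

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(20,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])

# Train briefly and evaluate on a held-out slice.
model.fit(X[:800], y[:800], epochs=5, batch_size=32, verbose=0)
loss, acc = model.evaluate(X[800:], y[800:], verbose=0)
print(f"held-out accuracy: {acc:.3f}")
```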

Site Reliability Engineer

  • Ability to explore data and troubleshoot across infrastructure and applications
  • A passion for automation where appropriate and beneficial
  • Familiarity with toil automation
  • Understand the end-to-end complexity of Big Data systems
  • Experience in the Hadoop ecosystem (Spark, Hive, HBase, YARN and Kafka), Spark Core, Spark-SQL and live streaming datasets using Kafka/Spark Streaming
  • Linux shell scripting and YAML
  • Cloud computing technologies like AWS or GCP
  • Cloudera distribution framework and Jenkins (or a similar CI tool); use error budgets and apply automation for remediation (a toy sketch follows this list)
  • Ability to automate via scripts, robotic process automation and virtual engineers
  • x86 infrastructure and object storage
  • OpenShift containers and Kubernetes
  • RHEL, Tomcat, Apache
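
To illustrate the error-budget idea referenced in the list above, here is a minimal, hypothetical Python sketch that compares an observed error rate against a budget derived from an assumed 99.9% availability SLO and decides when automated remediation should take over; the SLO value, window and function names are assumptions for illustration only.

```python
# Minimal, hypothetical error-budget check. The SLO target and the
# remediation policy are illustrative assumptions, not a DBS system.

SLO_AVAILABILITY = 0.999                 # assumed 99.9% availability target
ERROR_BUDGET = 1.0 - SLO_AVAILABILITY    # allowed error ratio over the window


def budget_remaining(total_requests: int, failed_requests: int) -> float:
    """Return the fraction of the error budget still unspent."""
    if total_requests == 0:
        return 1.0
    error_ratio = failed_requests / total_requests
    return 1.0 - (error_ratio / ERROR_BUDGET)


def decide(total_requests: int, failed_requests: int) -> str:
    """Toy policy: trigger automated remediation once the budget is spent."""
    remaining = budget_remaining(total_requests, failed_requests)
    if remaining <= 0:
        return "budget exhausted: trigger remediation / freeze releases"
    return f"within budget: {remaining:.1%} of the error budget remains"


if __name__ == "__main__":
    print(decide(total_requests=1_000_000, failed_requests=800))
    print(decide(total_requests=1_000_000, failed_requests=1_500))
```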

Solution Architect

  • Hadoop frameworks such as Hive, Sqoop, Impala, and Spark
  • Hands-on experience in SQL; familiarity with data loading and workflow tools like Flume, Sqoop and Oozie
  • Strong knowledge of big data, analytics, relational database systems and data warehousing techniques
  • Working knowledge of end-user BI tools and solutions (Apache Superset, Tableau, QlikView)
  • Strong listening and communication skills, with the ability to clearly and concisely explain complex problems and technologies to non-expert and executive audiences
  • NoSQL databases like HBase, Cassandra, Redis and MongoDB
  • Hadoop ecosystem technologies like Flume
  • RDBMS like Teradata, Oracle and SQL Server
  • Certifications from AWS, GCP, Cloudera, HortonWorks and/or MapR
  • Knowledge of Java SE, Java EE, JMS, XML, XSL, Web Services and other application integration related technologies, a plus
  • Extensive practical experience and applied knowledge in some of the following areas: large data volumes (multi-TB datasets), large scale data transformation, enterprise application design, optimizing database models, complex SQL transformations on large data volumes, complex event processing (CEP) pipelines, cloud computing models, Enterprise Integration

How to apply for the DBS Data & Machine Learning Engineering Virtual Hiring Event 2021 Hyderabad?

How the DBS Data & Machine Learning Engineering Virtual Hiring Event will work

1. Take our Online Assessment

Test your abilities through our tech assessment challenge on HackerRank (to be taken on a computer).

Deadline: 7 JAN 2021, 11:59 (GMT +8)

2. Get Selected

Shortlisted candidates will be notified by email to join the virtual hiring event for a role based in Hyderabad.

3. Participate in our Virtual Hiring Event

Hear from our Technology Role Models, attend virtual interviews and get closer to being part of our team!

DATE: 6th Feb 2021
(Event details will be shared by email with the shortlisted candidates)

Selected individuals from the event may be interviewed further for positions at DBS Hyderabad immediately after the event (Terms and conditions apply).

Apply Link/Registration Link of DBS Virtual Off Campus Drive: Click Here

Source Link: Click Here

Important Note: Candidates must read all the instructions and requirements carefully while applying for the job. You have to fill in all the required fields, and all communications from the company will be sent to your registered email ID. Keep checking your email for the next round once your resume is shortlisted.

Seekajob is a job-sharing platform for all job seekers. We do not charge any cost or service fee for any job posted on our website, nor have we authorized anyone to do so. We provide job links from the careers pages of the organizations. Applicants are advised to check all the details when applying for a job to avoid any inconvenience.