Job offer


Data Engineer

Salary:
1400 - 1600 PLN/MD + VAT
Type of employment:
B2B
Date:
2025.04.09
Location:
Kraków
Offer
  • We are open to the form of employment that suits your preferences
  • Work with an experienced and engaged team that is willing to learn, shares knowledge, and is open to growth and new ideas
  • Hybrid: 2 days in the Kraków office / 3 days remote
  • Mindbox is a dynamically growing IT company, but still not a large one – everybody can have a real impact on where we are going next
  • We invest in developing the skills and abilities of our employees
  • We offer attractive benefits and provide all the tools required for work, e.g. a computer
  • Interpolska Health Care, Multisport, Warta Insurance, training platform (Sages)

Creating an inspiring place where talented people thrive, we use their expertise and courage to introduce the technology of the future into your business. This is the foundation of Mindbox and the goal of our business and technology journey. We operate and develop in four areas:

🤖 Autonomous Enterprise - automation of business processes using RPA, OCR, and AI.

🌐 Business Management Systems (ERP) - we implement, adapt, optimize, and maintain flexible, safe, and open ERP systems for production and distribution companies worldwide.

🤝 Talent Network - we provide access to the best specialists.

☁️ Modern Architecture - we build integrated, sustainable, and open CI/CD environments based on containers, enabling safe and more frequent delivery of proven changes to application code.

We treat technology as a tool to achieve a goal. Thanks to our consultants' reliability and proactive approach, initial projects usually turn into long-term cooperation. For over 16 years, Mindbox has provided a range of services supporting clients in digital transformation.


 

Tasks

As a key member of the technical team, working alongside Engineers, Data Analysts, and Business Analysts, you will define and contribute at a high level to many aspects of our collaborative Agile development process:

  • Promoting development standards, code reviews, mentoring, and knowledge sharing
  • Production support and troubleshooting
  • Implementing tools and processes, handling performance, scale, availability, accuracy, and monitoring
  • Liaising with BAs to ensure that requirements are correctly interpreted and implemented
  • Participating in regular planning and status meetings
  • Contributing to the development process through involvement in sprint reviews and retrospectives
  • Providing input into system architecture and design
Requirements
  • 5+ years' total experience with software design, PySpark development, and automated testing of new and existing components in an Agile, DevOps, dynamic environment
  • PySpark or Scala development and design
  • Experience using scheduling tools such as Airflow
  • Experience with most of the following technologies: Apache Hadoop, PySpark, Apache Spark, YARN, Hive, Python, ETL frameworks, MapReduce, SQL, RESTful services
  • Sound working knowledge of the Unix/Linux platform
  • Hands-on experience building data pipelines using Hadoop components (Hive, Spark, Spark SQL)
  • Experience with industry-standard version control tools (Git, GitHub), automated deployment tools (Ansible and Jenkins), and requirement management in JIRA
  • Understanding of big data modelling using both relational and non-relational techniques
  • Experience debugging code issues and communicating the findings to the development team

 

The successful candidate will also meet the following good-to-have requirements:

  • Experience with Elasticsearch
  • Experience developing Java APIs
  • Experience with data ingestion
  • Understanding of or experience with cloud design patterns
  • Exposure to DevOps and Agile project methodologies such as Scrum and Kanban