Job Position:
Technical Architect - OLTP DB
Job Type:
Full-Time
Location:
Kolkata, IN
Experience:
- 12 to 17 Years
Must Have:
- Data Architecture
- RDBMS
- Python
- Apache Spark with Python
- Any Cloud
Key Responsibilities:
Architectural Design:
Lead the design and implementation of scalable and high-performance OLTP database systems. Develop data architecture strategies to meet business needs and ensure data integrity, availability, and security.
Database Management:
Oversee the configuration, tuning, and optimization of RDBMS (Relational Database Management Systems) to handle high transaction volumes efficiently. Ensure databases meet performance, reliability, and scalability requirements.
Data Integration:
Collaborate with data engineers and developers to integrate various data sources into the OLTP environment. Use Apache Spark with Python to facilitate data processing and transformation tasks.
Cloud Solutions:
Design and implement cloud-based database solutions on any major cloud platform (e.g., AWS, Azure, Google Cloud) to support data management and analytics.
Technical Leadership:
Provide technical guidance and mentorship to development teams on best practices for database design, data management, and the use of Python for data processing.
Innovation & Improvement:
Stay current with emerging technologies and industry trends. Propose and implement enhancements to improve database performance and data handling capabilities.
Documentation & Compliance:
Maintain thorough documentation of database architectures, designs, and configurations. Ensure compliance with industry standards and regulatory requirements.
Required Skills and Qualifications:
Data Architecture:
Strong expertise in designing data architectures for OLTP systems, including experience with data modeling, normalization, and transaction management.
RDBMS Expertise:
Proven experience with major RDBMS platforms (e.g., PostgreSQL, MySQL, Oracle, SQL Server), including performance tuning and optimization.
Python Programming:
Proficiency in Python, with a focus on using it for data processing, automation, and integration tasks.
Apache Spark with Python:
Hands-on experience with Apache Spark, particularly with PySpark, for large-scale data processing and transformation.
Cloud Platforms:
Experience with cloud-based database solutions and services on platforms such as AWS, Azure, or Google Cloud.
Problem-Solving:
Strong analytical and problem-solving skills with a track record of resolving complex database-related issues.
Communication:
Excellent communication skills, with the ability to articulate technical concepts to both technical and non-technical stakeholders.
Qualifications:
Bachelor’s degree in Computer Science, Information Technology, or a related field.