Information Technology & Engineering
- Recognized leader: Rated the #1 prepaid wireless provider in the U.S.
- Technology driven: Opportunity to work with state-of-the-art technology.
- Teamwork: A supportive team environment that thrives on innovation.
- Culture: An entrepreneurial focus, where ownership and ingenuity are expected.
- Benefits: Excellent health benefits, matching 401(k), and education reimbursement.
What you will do:
The position is responsible for constructing and optimizing our data lake and data pipeline architecture, and for optimizing data flow and collection for cross-functional teams. The incumbent is expected to be an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, database architects, data analysts, and data scientists on data initiatives and will ensure that optimal data delivery architecture is consistent throughout ongoing projects. The individual must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products. The right candidate will be excited by the prospect of optimizing or even re-designing our company’s data architecture to support our next generation of products and data initiatives. The incumbent will stage the data and make it ready for Data Science, Analytics, Artificial Intelligence, and Machine Learning purposes.
The position is also responsible for data analysis, trending, and root-cause identification of customer-impacting issues across all enterprise systems, channels, and brands. The individual must be very strong in logical and analytical thinking, with experience in SQL, Oracle SQL (other SQL dialects desired), and shell scripting, and with strong attention to detail. The role requires the ability to recognize data patterns in large sets of structured and unstructured data, and to monitor success and failure rates on all channels for all key provisioning processes such as Activation, Reactivation, Purchases, Redemptions, Ports, and Upgrades.
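The kind of success/failure-rate monitoring described above can be sketched in a few lines of plain Python. This is an illustrative example only; the record fields (`channel`, `process`, `status`) and the sample data are hypothetical, not taken from any actual Tracfone system.

```python
from collections import defaultdict

def success_rates(transactions):
    """Compute the success rate per (channel, process) pair.

    Each transaction is a dict with hypothetical keys 'channel',
    'process', and 'status' ('SUCCESS' or 'FAILURE').
    """
    totals = defaultdict(int)
    successes = defaultdict(int)
    for t in transactions:
        key = (t["channel"], t["process"])
        totals[key] += 1
        if t["status"] == "SUCCESS":
            successes[key] += 1
    return {key: successes[key] / totals[key] for key in totals}

# Made-up records for illustration.
records = [
    {"channel": "web", "process": "Activation", "status": "SUCCESS"},
    {"channel": "web", "process": "Activation", "status": "FAILURE"},
    {"channel": "retail", "process": "Port", "status": "SUCCESS"},
]
rates = success_rates(records)
```

In practice these rates would be computed over large structured/unstructured datasets and tracked per channel and brand, but the grouping logic is the same.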
Individual must be able to work well with others in a team environment and across departments. The incumbent should mentor junior staff in business knowledge and technical skills.
- Build data streams to ingest, load, transform, group, logically join, and assemble data ready for data analysis, analytics, reporting, next-best-action, and next-best-offer use cases.
- Pipeline data using cloud or on-premises technologies: AWS big data services, Hadoop, HDFS, AI/deep learning APIs, SQL/NoSQL and unstructured databases, etc.
- Write infrastructure as code using Terraform.
- Design and implement a QA framework within the data lake: design test strategies, write test cases, and build test automation.
- Maintain integrity between multiple databases, including but not limited to CLARIFY, BI, BRM, OFS (inbound/outbound processes), and MAX.
- Apply T-shaped skills across the data lake and data pipeline to take data from the source all the way to the consumption layer.
- Maintain knowledge and proficiency of current and upcoming hardware/software technologies. Mentor junior staff in ramping up analytical and technical skills.
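The ingest → transform → join → group flow named in the first bullet can be sketched in plain Python. This is a minimal stand-in for what would run at scale in PySpark or Glue; all table names and fields are hypothetical.

```python
from collections import defaultdict

# Hypothetical ingested rows -- stand-ins for two data-lake sources.
customers = [
    {"customer_id": 1, "brand": "A"},
    {"customer_id": 2, "brand": "B"},
]
events = [
    {"customer_id": 1, "type": "Activation"},
    {"customer_id": 1, "type": "Upgrade"},
    {"customer_id": 2, "type": "Activation"},
]

# Transform: index customers by id so the join is a lookup.
by_id = {c["customer_id"]: c for c in customers}

# Logical join: attach each customer's brand to their events.
joined = [{**e, "brand": by_id[e["customer_id"]]["brand"]} for e in events]

# Group: event counts per (brand, type), ready for reporting/analytics.
counts = defaultdict(int)
for row in joined:
    counts[(row["brand"], row["type"])] += 1
```

The same shape (join dimension data onto fact data, then aggregate) underlies most of the consumption-layer datasets the role would produce, regardless of the engine used.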
What you will need:
- A bachelor’s degree in Computer Science, or equivalent, from an accredited college.
- Strong knowledge of AWS Cloud Systems.
- Strong knowledge of AWS data related services (DMS, Glue, EMR, S3, Athena, Lambda, Redshift, DynamoDB, KMS).
- Strong knowledge of Python and PySpark (Hive).
- Strong knowledge of Unix/AIX and Windows operating systems, standard concepts, practices, and procedures within the relational database field.
- Strong database/relational/non-relational concepts required.
- Strong analytical and problem-solving skills.
- Hadoop, Kafka, Spark, Scala (preferred).
- Must have 5+ years of experience with data systems/warehouses/lakes or equivalent systems across multiple OS platforms.
- Must have 3+ years of Python and PySpark experience.
- Must have 3+ years of experience with AWS or alternative cloud systems.
- Must possess or develop business knowledge of how customer transactions reflect the business logic that drives existing or future code.
- Must possess or develop ability to converse with the business, development, operations, carriers, vendors, etc.
- Shell scripting, SQL stored procedures (IBM and other databases), and Java skills are desired.
- Strong experience with the architectural components comprising the middleware is required.
- 1+ years of Snowflake experience ingesting, processing, and analyzing data.
- 1+ years of experience with Matillion, IBM DataStage, or an equivalent ETL tool.
Tracfone Wireless is an Equal Employment Opportunity employer. We embrace diversity and do not discriminate based on race, religion, color, national origin, sex, sexual orientation,