
Yotta Tech Ports Inc. provided a comprehensive Customer 360 solution for one of our large clients in the US, revolutionizing their approach to marketing campaigns, upsell and cross-sell opportunities, and customer data enrichment in front-end applications. This transformative solution empowered the organization to better understand and engage with their customers, driving significant improvements in their outreach and service delivery.
Challenges Faced Without a Customer 360 Solution
- Fragmented Customer Data: Customer data was scattered across multiple systems, making it difficult to get a unified view of the customer.
- Inconsistent Data Quality: Different systems had varying levels of data quality, leading to inaccuracies and inconsistencies in customer information.
- Limited Data Accessibility: Accessing customer data required navigating through multiple systems, which was time-consuming and inefficient.
- Inefficient Marketing Campaigns: Without a unified view of the customer, marketing campaigns were less targeted and less effective.
- Missed Upsell and Cross-Sell Opportunities: The lack of integrated customer data made it challenging to identify and act on upsell and cross-sell opportunities.
- Poor Customer Experience: Incomplete and inconsistent customer data led to a suboptimal customer experience, as customer interactions were not personalized or informed by a comprehensive understanding of the customer.
Solution Provided by Yotta Tech Ports Inc.
Identify Customer Data Sources: The initial challenge was identifying customer data across the numerous business applications the client used for their day-to-day business. This involved interviewing various departments and business units to gain insights into the customer data they use in their daily operations. The process led us to create an initial data catalog of ~175 business applications containing B2C customer data, B2B customer data, or both. It was a revelation for both the client stakeholders and the team to discover how many business applications handling customer data were in daily use without ever having been mastered. Some were internally developed custom applications, while others were commercial off-the-shelf (COTS) products; some were heavily used and critical, while others were sparsely used. It was a wide variety, to say the least.
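A catalog like this can be captured in a simple, queryable structure. The sketch below is purely illustrative (the field names and example applications are hypothetical, not the client's actual schema):

```python
from dataclasses import dataclass

@dataclass
class CatalogEntry:
    # Illustrative fields; the client's actual catalog schema is not published.
    app_name: str
    app_type: str          # "custom" or "COTS"
    customer_domain: str   # "B2C", "B2B", or "both"
    criticality: str       # e.g. "critical" or "sparse"
    owner_unit: str        # business unit identified during interviews

# Two hypothetical entries standing in for the ~175 real applications.
catalog = [
    CatalogEntry("OrderHub", "custom", "B2C", "critical", "Fulfillment"),
    CatalogEntry("PartnerCRM", "COTS", "B2B", "sparse", "Sales"),
]

# The catalog can then answer scoping questions, e.g. which apps hold B2C data.
b2c_apps = [e.app_name for e in catalog if e.customer_domain in ("B2C", "both")]
```

Even a lightweight structure like this lets stakeholders filter the application landscape by domain, criticality, or owning business unit when planning the ingestion work.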
Secure Access: The team worked with the client’s IT teams to gain access to these ~175 data sources. Some were available via APIs, some through backend databases, and some were simply files in an SFTP location or another data store. This required raising several access requests and getting them approved by the business stakeholders, so that the IT teams could create access tokens with read access, database accounts with read access, or grant read access to the file repositories.
Data Profiling: Once the team got their hands on the actual datasets, we began a data profiling exercise to extract the 5 Vs of big data – Volume, Variety, Velocity, Veracity, and Value. This was an important step in the overall process, as the client stakeholders had never taken a holistic view of all the customer data collected and owned by them over the decades across these ~175 business applications. It was also important for them to understand whether they needed all ~175 disparate business applications, or whether applications serving a similar need/purpose could be centralized, thereby simplifying the application landscape. The following are some of the outcomes of the team’s data profiling exercise:
- Volume: Over 240 million B2C customer records and over 6 million B2B customer records were identified across ~175 business applications, highlighting the extensive reach and scale of the data.
- Variety: Data sources included Salesforce instances, Adobe products, Oracle DB, SQL Server DB, MySQL DB, REST APIs, a staggering number of CSV files, and text files. The B2B data ranged from large corporations to government agencies, showcasing the diversity of data types and sources.
- Velocity: Approximately 50,000 customer records were generated daily and had to be ingested into the Customer 360 platform by the end of the day, emphasizing the need for timely and efficient data processing. These were not necessarily new customers; many were pre-existing customer records arriving from another business application.
- Veracity: Data quality varied across source systems. For example, CRM data was less up to date compared to order fulfillment system data, necessitating a focus on data accuracy and reliability.
- Value: Data value was determined based on business needs and urgency, primarily from the interviewing process, ensuring that the most critical data was prioritized for integration.
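Most of these outcomes can be computed mechanically once a source's records are in hand. The following is a minimal sketch of such per-source profiling; the field names (`source`, `created_on`, `email`) and the specific metrics are illustrative assumptions, not the client's actual profiling logic:

```python
from collections import Counter

def profile_records(rows):
    """Compute simple profile metrics for one source's customer records.
    Field names are illustrative; real profiling would cover many more checks."""
    volume = len(rows)
    variety = Counter(r["source"] for r in rows)          # record mix by origin
    per_day = Counter(r["created_on"] for r in rows)
    velocity = max(per_day.values()) if per_day else 0    # peak records per day
    missing_email = sum(1 for r in rows if not r.get("email"))
    veracity = round(1 - missing_email / volume, 2) if volume else 0  # completeness
    return {"volume": volume, "variety": dict(variety),
            "velocity": velocity, "veracity": veracity}

# Tiny hypothetical sample standing in for one source system's extract.
rows = [
    {"source": "CRM", "created_on": "2024-05-01", "email": "a@x.com"},
    {"source": "CRM", "created_on": "2024-05-01", "email": ""},
    {"source": "Orders", "created_on": "2024-05-02", "email": "b@x.com"},
]
print(profile_records(rows))
```

Running this kind of check per source system is what surfaces findings like the CRM-versus-order-fulfillment freshness gap noted above; the Value dimension, by contrast, came from stakeholder interviews rather than computation.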
Technical Solution: The Yotta Tech Ports Inc. team has experience implementing several technical solutions to ingest data from APIs, databases, and file systems into a central data repository. These include:
- Informatica Workflows: Used to ingest data from a variety of sources, providing robust data integration capabilities.
- AWS Lambda Functions: A serverless AWS compute service in which Python scripts perform the ingestion, offering scalability without infrastructure management.
- AWS Glue: AWS service that can be used for data ingestion, enabling easy and cost-effective ETL processes.
- Airbyte: An open-source EL (extract-load) tool used for data ingestion, providing flexibility and ease of use.
- Mage.ai: A data pipeline tool in which Python scripts handle the ingestion, offering customization and control.
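To make the Lambda-style option concrete, here is a minimal sketch of a Python handler that pulls a batch of records from a source API. Everything here is a hypothetical illustration (the event shape, the JSON-array endpoint, and the omitted write to a landing store), not the client's implementation:

```python
import json
import urllib.request

def fetch_customers(api_url):
    """Pull one batch of customer records from a (hypothetical) endpoint
    that returns a JSON array of records."""
    with urllib.request.urlopen(api_url) as resp:
        return json.loads(resp.read())

def lambda_handler(event, context):
    # In a real deployment, the source URL and credentials would come from
    # the Lambda trigger or environment configuration, and the records would
    # be written to a landing store (e.g. an S3 staging bucket via boto3, or
    # a database over a driver) rather than just being counted.
    records = fetch_customers(event["api_url"])
    return {"statusCode": 200, "ingested": len(records)}
```

The same fetch-normalize-land shape applies whichever tool runs it; the tools above differ mainly in how much of that plumbing they manage for you.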
In consultation with the client IT team, we decided to go with the client’s existing technical stack which included:
- Informatica Power Center workflows to perform the data ingestion.
- Teradata as the central data repository, into which data from the ~175 business applications was landed in a common format.
Each ingested record was tagged with a source system code and a transactional timestamp. Data ingestion was typically performed as a batch process based on the established frequency, usually every few hours.
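The tagging itself was implemented inside the Informatica PowerCenter workflows, but the logic is simple enough to sketch language-neutrally; the Python below is an illustration with hypothetical column names (`src_sys_cd`, `txn_ts`), not the client's mapping:

```python
from datetime import datetime, timezone

def tag_batch(records, source_system_code):
    """Add the source system code and a transactional timestamp to each
    record in an ingestion batch before it is written to the landing layer."""
    txn_ts = datetime.now(timezone.utc).isoformat()  # one timestamp per batch run
    return [
        {**record, "src_sys_cd": source_system_code, "txn_ts": txn_ts}
        for record in records
    ]

# Hypothetical batch from one source system, tagged on its way to landing.
batch = [{"customer_id": 101}, {"customer_id": 102}]
tagged = tag_batch(batch, "CRM01")
```

Tagging every landed record this way is what later lets the standardization layer trace each customer attribute back to the system and batch it came from.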
Benefits to the Client
- Improved Data Accessibility: Enhanced access to customer data across various systems, enabling more informed decision-making and targeted marketing campaigns.
- Enhanced Data Quality and Consistency: Better data quality and consistency through profiling and secure access, leading to more accurate customer insights and improved customer experiences.
Conclusion
The data ingestion process from various source systems into a common landing layer was a critical first step in the Customer 360 solution. This foundational step set the stage for subsequent data standardization and enrichment, ensuring a comprehensive and accurate view of customer data. Stay tuned for the next blog post, where we will delve into the intricacies of data standardization and enrichment, and how it further enhances the Customer 360 solution.