Pricing Analytics for the eCommerce & Retail Industry
Our customer is a market leader in the pricing analytics domain. Netsmartz has been helping them with product data matching since July 2022, starting with a team of 20 and ultimately scaling to 60 Data Analysts. Our output data powers the client’s software, helping their end customer marketplaces make better pricing decisions and respond quickly to competitor price changes.
About The Company
Our customer is a sophisticated B2B data intelligence and solutions company that turns data analytics into dollars for the retail and service industries.
Their innovative retail data and analytics platform gives businesses a holistic view of product pricing across the market, helping them understand and effectively engage with consumers while optimizing profitability.
The company’s focus is to develop and continuously improve its software for large eCommerce marketplaces to manage different prices and promotions across various channels and customer touchpoints.
Helmed by 1 Senior Project Manager, the team comprised 60 Data Analysts, divided into 3 squads of 20 analysts, each led by a Team Lead.
Netsmartz’s Data Analysts faced a daunting task: going through 200,000+ products daily and manually matching their pricing and availability by finding the same products on the end customers’ competitor websites.
The information was then entered into the in-house software provided by our customer, which had multiple layers and fields to segregate duplicate products, categories, package sizes, conditions of sale, etc. The analysts also kept a log of products that went out of stock and waited for them to come back in stock so that the comparison could be completed successfully.
The challenge was not only to train the 60 Data Analysts on every product but also to maintain a minimum of 95% output data quality every day for every analyst, especially with submission timelines measured in days or weeks.
Netsmartz followed this process to fulfill the client’s requirements:
Mapping & Identifying Appropriate Resources
The job required English language proficiency, fast typing skills, mindful web scraping with an eye for detail, and the resilience to work through the mammoth database repeatedly. We interviewed 90+ internal candidates who were on the bench to map their aptitude and skill set for this kind of job.
Team Engagement & Continuous Training
The team was provided with 1 week of training, after which they were ready to work in the production environment. Each day, QC gave more feedback, which meant continuous updates to the SOPs and training modules.
Consistent Client Feedback & Action
We turned the time zone difference and the alignment of shifts to our advantage, garnering daily client feedback for improvements in product data matching attributes and the software-driven data input details.
Managed Scale Up of Teams
The first squad of 20 analysts went live after a week’s training in June 2022. They worked for a month before the second squad of twenty went through training in July, and the third squad of twenty analysts entered training the following week. This meant seamlessly building a team of 60 Data Analysts in a matter of a month, with a manageable scale-up, a predictable training curve for Squads 2 and 3, and minimal performance issues thanks to the learnings from Squad 1 at the start of the data-building pipeline.
The primary achievements of this project were:
- Continuous updating of modules meant the maintenance of logs of common mistakes and using those as references as the team scaled and proceeded with multiple product categories over time.
- We improved the per-analyst matching speed from an average of 50 products across 4-5 websites per day to a range of 240-320 products across the same number of websites per day within three weeks, in every working squad.
- The speed varied depending on the number of websites to be referenced for matching product data, the most cumbersome task being matching every product across 40+ websites daily. By the end, we could clock 15-20 products per analyst per day when matching had to be done across 40 websites per product.
- We improved the quality score from 68-75% to an average of 95% in a matter of three weeks for each working squad.
- Ultimately, we managed to push through manual data matching of 200,000+ products across 100+ websites in a matter of 4 months.
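As a rough illustration of the daily quality metric mentioned above, a per-analyst score like the 95% target could be computed as the share of QC-reviewed matches that were correct. This is a minimal sketch under assumed definitions, not the client’s actual tooling; the function names and the error counts in the example are hypothetical.

```python
def quality_score(correct_matches: int, total_reviewed: int) -> float:
    """Percentage of QC-reviewed product matches that were correct."""
    if total_reviewed == 0:
        return 0.0
    return 100.0 * correct_matches / total_reviewed


def meets_target(score: float, target: float = 95.0) -> bool:
    """Check a score against the daily 95% data-quality threshold."""
    return score >= target


# Hypothetical example: an analyst matched 300 products; QC flagged 9 errors.
score = quality_score(300 - 9, 300)
print(f"{score:.1f}% -> meets target: {meets_target(score)}")
```

Under these assumed definitions, 9 errors out of 300 reviewed matches yields 97.0%, clearing the 95% bar, while anything above 15 errors per 300 would fall below it.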