Client Profile
The enterprise operates across healthcare, materials, and imaging, with activities spanning research and development, manufacturing, distribution, and after-sales support. With a worldwide footprint, the company works through a network of subsidiaries and a large workforce, serving customers across regions and industries.
Across these businesses, data flows through ERP, supply chain, and logistics systems that support finance close, inventory movement, and service operations. As reporting needs grew, teams needed a single view they could trust, without relying on manual extracts or conflicting definitions from system to system, especially at month-end.
Technical Challenges
Our client’s business teams relied on disparate systems, including Oracle ERP and SAP, for data storage and reporting. These systems, although functioning well, lacked agility, scalability, and advanced visualization capabilities. The major technical challenges were:
Data Silos
Oracle ERP data remained isolated, which limited cross-team reporting and slowed coordinated decision-making across functions.
Complex Data Extraction
Large datasets, complex schemas, and system dependencies made extraction slow and difficult to modify.
Performance Challenges
Direct ERP queries sometimes slowed core processing, particularly during periods of high reporting demand.
Limited Tech Familiarity
Migrating was a clear goal, but unfamiliarity with new tools added uncertainty to an already complex transition.
Our Solution
We migrated data from Oracle ERP and SAP into Snowflake and enabled reporting in Power BI through a structured ETL pipeline. During the migration, data was extracted, staged, and transformed using automated jobs and custom console applications, with controls in place to avoid disrupting operational systems.

The implementation supports multiple report types across accounting, logistics, and supply chain management. To keep the work predictable for business and support teams, we documented the processing flow through staging servers, scheduled tasks, and console applications, and defined step-level checks for validation, error handling, and load sequencing before data reached Power BI.
Building the foundation: Automated data ingestion and staging
One of the initial technical milestones was an automated ETL pipeline. The pipeline runs on a Windows Server environment and extracts Oracle ERP records on a scheduled, incremental basis. Files are logged and validated during each run, with exceptions flagged for review. Validated records are archived for audit, giving the client’s teams visibility into ingestion status across reporting cycles.
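The production jobs are scheduled console applications on Windows Server rather than Python, but the incremental extract step can be illustrated with a minimal sketch. Everything specific here is assumed for illustration only: the python-oracledb client, the GL_TRANSACTIONS table and LAST_UPDATE_DATE watermark column, and the local state and staging paths.

```python
import csv
import json
from datetime import datetime, timezone
from pathlib import Path

import oracledb  # python-oracledb client, assumed for illustration

WATERMARK_FILE = Path("state/gl_transactions.watermark.json")  # hypothetical state file
OUTPUT_DIR = Path("D:/staging/outbound")                       # hypothetical staging folder


def read_watermark() -> str:
    """Return the timestamp of the last successfully extracted change."""
    if WATERMARK_FILE.exists():
        return json.loads(WATERMARK_FILE.read_text())["last_update"]
    return "1900-01-01 00:00:00"


def extract_increment(conn: "oracledb.Connection") -> Path:
    """Pull only rows changed since the previous run and write them to a dated CSV."""
    since = read_watermark()
    run_stamp = datetime.now(timezone.utc).strftime("%Y%m%d_%H%M%S")
    out_file = OUTPUT_DIR / f"GL_TRANSACTIONS_{run_stamp}.csv"

    with conn.cursor() as cur:
        cur.execute(
            """
            SELECT txn_id, account, amount, last_update_date
              FROM gl_transactions
             WHERE last_update_date > TO_DATE(:since, 'YYYY-MM-DD HH24:MI:SS')
             ORDER BY last_update_date
            """,
            since=since,
        )
        max_seen = since
        with out_file.open("w", newline="") as fh:
            writer = csv.writer(fh)
            writer.writerow([col[0] for col in cur.description])  # header row
            for row in cur:
                writer.writerow(row)
                max_seen = row[3].strftime("%Y-%m-%d %H:%M:%S")   # track high-water mark

    # Persist the new high-water mark only after the file is fully written,
    # so a failed run is simply re-extracted on the next schedule.
    WATERMARK_FILE.write_text(json.dumps({"last_update": max_seen}))
    return out_file
```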

To keep reporting predictable, we defined cutoffs for each run and added checks for row counts, file naming, and load order. When something looked off, the job stopped early and logged the cause in plain terms for support teams. This reduced rework and avoided surprises at month-end.
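A minimal sketch of the pre-load validation gate described above; the file-name pattern, minimum row count, and the load-order rule are hypothetical stand-ins for the client's actual checks.

```python
import logging
import re
from pathlib import Path
from typing import Optional

log = logging.getLogger("ingest.validation")

FILE_PATTERN = re.compile(r"^GL_TRANSACTIONS_\d{8}_\d{6}\.csv$")  # hypothetical naming rule
MIN_DATA_ROWS = 1  # a header-only file signals an empty or failed extract


class ValidationError(RuntimeError):
    """Raised to stop the run early with a plain-language reason."""


def validate_staged_file(path: Path, last_loaded: Optional[str]) -> int:
    """Check naming, row count, and load order before the file is allowed to load."""
    # 1. File naming: reject anything that does not match the agreed pattern.
    if not FILE_PATTERN.match(path.name):
        raise ValidationError(f"File name '{path.name}' does not match the expected pattern.")

    # 2. Row count: the extract must contain at least one data row below the header.
    with path.open() as fh:
        data_rows = sum(1 for _ in fh) - 1
    if data_rows < MIN_DATA_ROWS:
        raise ValidationError(f"'{path.name}' has {data_rows} data rows; expected at least {MIN_DATA_ROWS}.")

    # 3. Load order: the run stamp in the name must be newer than the last loaded file.
    if last_loaded is not None and path.name <= last_loaded:
        raise ValidationError(f"'{path.name}' is not newer than the last loaded file '{last_loaded}'.")

    log.info("Validation passed for %s (%d data rows).", path.name, data_rows)
    return data_rows
```

The calling job wraps this check in a try/except, logs the plain-language message, and exits non-zero so the scheduler marks the run as failed instead of loading suspect data.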
Layered data architecture for performance, scalability, and governance
To keep the warehouse both performant and future-ready, we applied a structured medallion pattern with four successive layers (a simplified sketch of the layer-to-layer flow follows the list):
- Bronze layer: Raw ingestion
Automated jobs extract records from Oracle and load them into Snowflake with minimal transformation.
- Silver layer: Historical staging
The staging layer stores incremental loads, keeps previous data versions for audit purposes, and prepares data for modeling.
- Gold layer: Core warehouse
Snowflake stored procedures curate fact and dimension tables, optimizing them for analytics and ad hoc queries.
- Platinum layer: Presentation
Filtered views expose only the data required for reporting, allowing business analysts to access it directly through Power BI.
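
The sketch below shows one way the layer-to-layer flow could be driven from a scheduling script, assuming the snowflake-connector-python client and illustrative object names (bronze.gl_transactions, gold.build_fact_gl, platinum.v_gl_summary); the real implementation relies on the client's own Snowflake stored procedures and table names.

```python
import snowflake.connector  # snowflake-connector-python, assumed for illustration

# Connection parameters are placeholders; real values come from secured configuration.
conn = snowflake.connector.connect(
    account="xy12345.eu-west-1", user="etl_service", password="***",
    warehouse="REPORTING_WH", database="ANALYTICS", role="ETL_ROLE",
)
cur = conn.cursor()

# Bronze -> Silver: incremental upsert into the historical staging layer
# (simplified; the real staging layer also retains prior versions for audit).
cur.execute("""
    MERGE INTO silver.gl_transactions AS tgt
    USING bronze.gl_transactions AS src
        ON tgt.txn_id = src.txn_id
    WHEN MATCHED THEN UPDATE SET
        tgt.account = src.account,
        tgt.amount = src.amount,
        tgt.last_update_date = src.last_update_date
    WHEN NOT MATCHED THEN INSERT (txn_id, account, amount, last_update_date)
        VALUES (src.txn_id, src.account, src.amount, src.last_update_date)
""")

# Silver -> Gold: a stored procedure curates the fact and dimension tables.
cur.execute("CALL gold.build_fact_gl()")

# Gold -> Platinum: a filtered view exposes only what Power BI needs.
cur.execute("""
    CREATE OR REPLACE VIEW platinum.v_gl_summary AS
    SELECT account,
           DATE_TRUNC('month', last_update_date) AS period,
           SUM(amount) AS amount
    FROM gold.fact_gl
    GROUP BY account, DATE_TRUNC('month', last_update_date)
""")

cur.close()
conn.close()
```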

Snowflake data warehouse implementation
After data reaches the staging layer, it is moved into Snowflake using console applications based on PUT and GET commands. We designed a star schema across the warehouse layer, supported by more than sixty staging and warehouse tables.
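As an illustration of the PUT-based load path, here is a minimal Python sketch using snowflake-connector-python; the stage name, table names, file format, and the COPY INTO step that follows the PUT are assumptions, and the actual console applications are not written in Python. GET is the companion command for downloading files from a stage and is not shown.

```python
import snowflake.connector

# Placeholder connection details; real values come from secured configuration.
conn = snowflake.connector.connect(
    account="xy12345.eu-west-1", user="etl_service", password="***",
    warehouse="LOAD_WH", database="ANALYTICS", schema="BRONZE", role="ETL_ROLE",
)
cur = conn.cursor()

# PUT uploads the validated CSV from the Windows staging server to an internal named stage.
cur.execute(
    "PUT file://D:/staging/outbound/GL_TRANSACTIONS_20240101_020000.csv "
    "@bronze.gl_stage AUTO_COMPRESS=TRUE"
)

# COPY INTO then loads the staged (compressed) file into the Bronze-layer table.
cur.execute("""
    COPY INTO bronze.gl_transactions
    FROM @bronze.gl_stage
    FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
    ON_ERROR = 'ABORT_STATEMENT'
""")

cur.close()
conn.close()
```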

We also tuned table design choices to match the heaviest reporting queries, so analysts were not waiting on long scans. Incremental load logic avoided full reloads, and reconciliation checks compared key totals between source and warehouse. This gave teams confidence that Snowflake reflected the numbers they signed off on.
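A sketch of the kind of reconciliation check described above, comparing record counts and amount totals between the Oracle source and the Snowflake warehouse. Table names and the rounding tolerance are illustrative; the connections are assumed to be the ones opened in the earlier sketches.

```python
def totals(cursor, sql: str) -> tuple[int, float]:
    """Fetch a (row count, amount total) pair using the given cursor."""
    cursor.execute(sql)
    count, total = cursor.fetchone()
    return int(count), float(total or 0)


def reconcile(ora_conn, sf_conn) -> bool:
    """Compare key totals between the Oracle source and the Snowflake warehouse."""
    with ora_conn.cursor() as ora_cur:
        src_count, src_total = totals(
            ora_cur, "SELECT COUNT(*), SUM(amount) FROM gl_transactions"
        )
    sf_cur = sf_conn.cursor()
    wh_count, wh_total = totals(
        sf_cur, "SELECT COUNT(*), SUM(amount) FROM gold.fact_gl"
    )
    sf_cur.close()

    counts_match = src_count == wh_count
    totals_match = abs(src_total - wh_total) < 0.01  # small tolerance for rounding
    if not (counts_match and totals_match):
        print(f"Mismatch: source {src_count}/{src_total:.2f} "
              f"vs warehouse {wh_count}/{wh_total:.2f}")
        return False
    print("Reconciliation passed: counts and totals match.")
    return True
```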
Structured data transformation and governance
After data was loaded, defined rules were applied to clean and organize it so reports stayed consistent. We limited access to authorized users, protected sensitive information, and recorded system activity to meet security and compliance needs without slowing everyday reporting.

Access rules were aligned with actual user groups, and permissions were limited to what each team required for reporting. Audit logs supported internal reviews, and sensitive fields were protected without blocking day-to-day analysis. We also documented data definitions so teams interpreted measures consistently.
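
To make the access model concrete, the sketch below shows hypothetical Snowflake grants and a masking policy issued from a small admin script. The role names, schemas, and masked column are placeholders, and column masking policies assume a Snowflake edition that supports them.

```python
import snowflake.connector

# Role-based access sketch: reporting roles see only the Platinum views,
# and a masking policy hides a sensitive value from non-privileged roles.
GOVERNANCE_STATEMENTS = [
    "CREATE ROLE IF NOT EXISTS REPORTING_ANALYST",
    "GRANT USAGE ON DATABASE analytics TO ROLE REPORTING_ANALYST",
    "GRANT USAGE ON SCHEMA analytics.platinum TO ROLE REPORTING_ANALYST",
    "GRANT SELECT ON ALL VIEWS IN SCHEMA analytics.platinum TO ROLE REPORTING_ANALYST",
    # Masking policy: only the finance-privileged role sees the raw account value.
    """
    CREATE MASKING POLICY IF NOT EXISTS analytics.gold.mask_account AS (val STRING)
    RETURNS STRING ->
        CASE WHEN CURRENT_ROLE() IN ('FINANCE_ADMIN') THEN val ELSE '****' END
    """,
    "ALTER TABLE analytics.gold.fact_gl MODIFY COLUMN account "
    "SET MASKING POLICY analytics.gold.mask_account",
]

conn = snowflake.connector.connect(
    account="xy12345.eu-west-1", user="governance_admin", password="***",
    role="SECURITYADMIN",  # placeholder admin role
)
cur = conn.cursor()
for stmt in GOVERNANCE_STATEMENTS:
    cur.execute(stmt)
cur.close()
conn.close()
```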
Orchestrated data pipelines for seamless reporting
As native orchestration tools did not fully meet operational needs, we developed a custom pipeline management system. It logs each ETL activity, from initial file movement through final transformation, giving the operations team clearer visibility and reducing troubleshooting time. Overall reliability improved, and pipeline issues became easier to manage: alerts surfaced load failures, data anomalies, and security-related events for prompt review and resolution.

In addition to logging, the framework captured detailed error information across extraction, transfer, loading, and transformation steps, with alerts notifying administrators of critical failures. Standard runbooks documented common failure patterns and fixes, reducing dependency on a few experts. Over time, issue resolution became faster and more predictable across reporting cycles.
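A minimal sketch of how step-level logging and failure alerts in such a framework might look; the SMTP relay, addresses, and step wrapper are illustrative, not the client's actual tooling.

```python
import logging
import smtplib
import traceback
from datetime import datetime, timezone
from email.message import EmailMessage

log = logging.getLogger("pipeline")

ADMIN_EMAIL = "data-ops@example.com"   # placeholder distribution list
SMTP_HOST = "smtp.example.com"         # placeholder mail relay


def alert(subject: str, body: str) -> None:
    """Notify administrators of a critical failure."""
    msg = EmailMessage()
    msg["Subject"] = subject
    msg["From"] = "etl-monitor@example.com"
    msg["To"] = ADMIN_EMAIL
    msg.set_content(body)
    with smtplib.SMTP(SMTP_HOST) as smtp:
        smtp.send_message(msg)


def run_step(name: str, func, *args):
    """Run one pipeline step, recording its outcome and alerting on failure."""
    started = datetime.now(timezone.utc)
    try:
        result = func(*args)
        log.info("step=%s status=success started=%s", name, started.isoformat())
        return result
    except Exception:
        detail = traceback.format_exc()
        log.error("step=%s status=failed started=%s\n%s", name, started.isoformat(), detail)
        alert(f"ETL step failed: {name}", detail)
        raise


# Usage: run_step("extract", extract_increment, oracle_conn)
#        run_step("load", load_to_snowflake, staged_file)
```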
Power BI enablement for dynamic business intelligence
With data stabilized and modeled in Snowflake, we delivered Power BI reporting for accounting, logistics, and supply chain teams. Accounting users reviewed GL activity, trial balances, and cost breakdowns with drill-through views that supported close work. Logistics teams tracked inventory movements and cutoffs without relying on offline extracts.

To balance freshness and performance, we used DirectQuery for time-sensitive views and Import mode for workloads requiring faster query response. Layouts followed department workflows, so users could move from summary to transaction detail. Access followed the same role model as the warehouse, keeping sensitive views limited to the right teams.
Business Goals and Measurable Outcomes
| Business objective | Business benefit delivered |
|---|---|
| Improve reporting reliability | 20+ reports standardized. Accounting, logistics, and supply chain departments now access consistent, ready-to-use reports. |
| Enable faster business decisions | Access to real-time data helps leaders act faster and with confidence. |
| Reduce manual data handling | Automation replaced manual extracts and handoffs, notably reducing delays and errors. |
| Protect core operational systems | ERP performance remained stable by separating reporting activity from transaction processing. |
| Ensure secure access | Sensitive data stays protected with role-based access and encryption. |
| Support cross-functional visibility | Teams worked from shared reports built on centralized data across business functions. |
Tech Stack
- Source systems: Oracle ERP, SAP, and external data sources
- Staging and processing: Windows Server scheduled jobs, custom console applications, SnowSQL
- Cloud data warehouse: Snowflake
- Data transformation: Snowflake stored procedures, medallion architecture (Bronze/Silver/Gold/Platinum layers)
- Reporting and visualization: Power BI