
Article
AI Applied: Turning data into intelligence with modern AI analytics
April 8, 2026 · Authored by Chris Wagner, Matt Pluster
Artificial intelligence is rapidly transforming how organizations extract value from their data. Yet many AI initiatives struggle, not because the technology isn’t powerful, but because businesses fail to clearly understand the types of data they’re working with and how to prepare it effectively.
At the core of successful AI adoption is a simple but critical distinction: there are two fundamentally different types of AI analytics, numeric and text-based. Each requires its own approach, tools and architecture. Organizations that recognize and operationalize this distinction are far more likely to generate meaningful, scalable outcomes.
Numeric data: Structured, measurable and familiar
Numeric data represents the structured, transactional backbone of most organizations. It includes quantifiable information generated through day-to-day operations, such as sales transactions, inventory levels, financial records and operational metrics.
This type of data typically resides in systems designed for operations — ERP platforms, accounting tools, e-commerce systems and other transactional databases.
Over the past two decades, organizations have developed mature practices for handling this data, including data warehousing, ETL pipelines, business intelligence reporting and data governance.
AI enhances this ecosystem by enabling faster insights, predictive analytics, and automation. However, the underlying requirement remains the same: clean, well-structured and integrated data.
Despite advancements in AI, organizations cannot skip foundational data work. Siloed systems, inconsistent definitions and poor data quality still limit the effectiveness of any analytics initiative.
Text-based data: Unstructured but rich with insight
While numeric data tells you what is happening, text-based data often reveals why.
This category includes unstructured or semi-structured information such as emails, contracts, meeting notes, support tickets and call recordings.
Historically, this data has been difficult to analyze at scale. However, modern AI, particularly large language models, has unlocked the ability to interact with and extract insights from text in a meaningful way.
But leveraging text data effectively requires a different kind of infrastructure.
Unlike structured datasets, text data cannot simply be dropped into a database and queried. It must go through a series of preparation steps to make it usable for AI systems.
1. Data ingestion and storage
The first step is establishing a centralized location to store documents. This could be a cloud-based storage system or data lake designed to handle large volumes of files.
The goal is to create a “landing zone” where raw data can be collected before processing.
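As a rough illustration, this ingestion step can be sketched as a script that gathers raw files from source folders into a single landing-zone directory before any processing. The folder names and paths below are hypothetical, not a prescribed layout:

```python
import shutil
import tempfile
from pathlib import Path

# Minimal sketch: collect raw files from source folders into one
# "landing zone" directory. Real pipelines would also record lineage
# and deduplicate; this only copies files.
def ingest(sources: list[Path], landing_zone: Path) -> list[Path]:
    landing_zone.mkdir(parents=True, exist_ok=True)
    collected = []
    for src in sources:
        for file in src.rglob("*"):
            if file.is_file():
                dest = landing_zone / file.name
                shutil.copy2(file, dest)
                collected.append(dest)
    return collected

# Demo with a temporary directory standing in for a source system.
root = Path(tempfile.mkdtemp())
(root / "crm").mkdir()
(root / "crm" / "notes.txt").write_text("customer call notes")
copied = ingest([root / "crm"], root / "landing_zone")
print(len(copied))
```

In practice the landing zone would be cloud object storage rather than a local folder, but the pattern of "collect first, process later" is the same.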
2. Converting to text
If the source data includes audio or other non-text formats (such as call recordings), it must first be converted into text through transcription.
This is essential because language models operate on text inputs. Without this step, valuable insights remain inaccessible.
3. Chunking and segmentation
Large documents are broken into smaller segments, or “chunks,” to make them easier for AI models to process.
This allows systems to retrieve and analyze only the most relevant portions of data, rather than scanning entire documents each time.
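A simple form of chunking can be sketched as splitting a document into fixed-size, overlapping word windows; the overlap preserves context across chunk boundaries. The sizes below are illustrative values, not recommendations:

```python
# Minimal sketch: split a document into overlapping word-based chunks.
# Production systems often chunk by tokens, sentences or sections instead.
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    words = text.split()
    chunks = []
    step = chunk_size - overlap  # advance less than a full chunk to overlap
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break  # final window already covers the end of the document
    return chunks

doc = " ".join(f"word{i}" for i in range(500))
chunks = chunk_text(doc)
print(len(chunks))  # 500 words -> 3 overlapping chunks of up to 200 words
```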
4. Metadata and context enrichment
Each chunk of data is enriched with metadata: additional information that provides context, such as the source document, author, creation date and business category.
Metadata helps AI systems understand not just the content, but the context in which it exists.
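One way to picture this step is as wrapping each chunk in a record that carries its context. The field names below are illustrative assumptions, not a fixed schema:

```python
from datetime import date

# Minimal sketch: attach contextual metadata to each chunk so it can be
# filtered and interpreted later. Field names are illustrative.
def enrich(chunks: list[str], source: str, author: str, category: str) -> list[dict]:
    return [
        {
            "text": chunk,
            "source": source,
            "author": author,
            "category": category,
            "ingested": date.today().isoformat(),
            "position": i,  # order within the original document
        }
        for i, chunk in enumerate(chunks)
    ]

records = enrich(["first chunk", "second chunk"],
                 source="contract_042.pdf", author="legal", category="contracts")
print(records[0]["source"])
```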
5. Vectorization and storage
The processed data is then stored in specialized systems such as vector databases or graph databases. These structures enable efficient similarity search and relationship mapping.
Rather than relying on exact keyword matches, AI can now retrieve information based on meaning and context.
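The idea of retrieval by meaning can be sketched with toy embeddings and cosine similarity. Real systems use learned, high-dimensional embeddings and a vector database; the 3-dimensional vectors below are purely illustrative:

```python
import math

# Minimal sketch: nearest-neighbour lookup by cosine similarity over
# toy embeddings. A vector database does this at scale with indexing.
def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Chunk label -> illustrative embedding vector.
store = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.1],
    "warranty terms": [0.6, 0.4, 0.2],
}

query = [0.85, 0.15, 0.05]  # stand-in for an embedded user question
best = max(store, key=lambda label: cosine(query, store[label]))
print(best)
```

The query vector never has to match any stored text exactly; it only has to point in a similar direction, which is what "retrieval by meaning" amounts to geometrically.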
6. AI query layer
Finally, large language models are connected to this prepared dataset. Instead of querying raw documents, they reference the structured and enriched data store.
This approach improves both speed and accuracy, allowing users to ask natural language questions and receive relevant, contextual answers.
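The query layer can be sketched as: retrieve the most relevant chunks, then build a prompt that grounds the model in that context. `call_llm` below is a placeholder for whichever model API an organization actually uses:

```python
# Minimal sketch of a retrieval-grounded query layer. `call_llm` is a
# placeholder, not a real API; in practice it would send the prompt to
# a hosted language model.
def build_prompt(question: str, retrieved_chunks: list[str]) -> str:
    context = "\n\n".join(retrieved_chunks)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
    )

def call_llm(prompt: str) -> str:
    # Placeholder for a real model call.
    return "(model response would appear here)"

retrieved = [
    "Refunds are issued within 14 days.",
    "Warranty covers parts for one year.",
]
prompt = build_prompt("How long do refunds take?", retrieved)
answer = call_llm(prompt)
```

Because the model answers from retrieved, enriched chunks rather than raw documents, responses stay tied to the organization's own data.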
One of the most important takeaways is that AI success is fundamentally a data problem.
Even with advanced tools, organizations must still integrate siloed systems, standardize definitions and maintain data quality.
AI can accelerate parts of this process, such as generating code or suggesting transformations, but it does not eliminate the need for thoughtful design.
Skilled data professionals remain essential for guiding these systems and ensuring that outputs align with business reality.
Many AI initiatives fail not because of poor technology, but because of poor planning. Common pitfalls include vague objectives, low-quality data and attempts to scale before the fundamentals are in place.
To improve the chances of success, organizations should follow a more disciplined approach.
Start with a clear use case
Define a specific business problem you want to solve. This could be improving customer support, accelerating proposal generation, or identifying operational inefficiencies.
A focused objective ensures that efforts remain aligned with measurable outcomes.
Identify the right data
Determine which type of data — numeric, text-based, or both — is required to support the use case.
This step helps guide decisions around architecture, tools and workflows.
Begin with a small dataset
Rather than attempting a full-scale rollout, start with a limited set of high-quality data.
This allows you to test assumptions, validate outputs and refine processes before scaling.
Build and test the pipeline
Develop the ingestion, processing and querying pipeline. Connect your AI model and evaluate the results.
At this stage, focus on learning and iteration rather than perfection.
Optimize and scale
Once the system produces reliable outputs, expand the dataset and enhance capabilities.
This may include adding new data sources, refining chunking and retrieval strategies, and improving accuracy.
Scaling should be gradual and informed by real-world performance.
AI is not a shortcut around data challenges; it is a force multiplier for organizations that already manage their data effectively.
By understanding the distinction between numeric and text-based analytics, and by investing in proper data preparation, businesses can unlock powerful new capabilities, from faster decision-making to deeper insight into the "why" behind the numbers.
The organizations that succeed will be those that treat AI not as a magic solution, but as part of a broader data strategy.
Implementing AI successfully requires more than just tools; it demands a thoughtful approach to data, architecture and business alignment.
We help organizations assess data readiness, design analytics architecture and implement AI solutions aligned with business goals.
Whether you’re just getting started or looking to scale existing efforts, Baker Tilly brings proven frameworks and hands-on expertise to accelerate your AI journey and ensure it delivers measurable results.