Data Flow
Below is the process by which ticketing data is ingested and processed to deliver clustered insights:
- Ticket data enters the system from various sources such as Leena AI Helpdesk, ServiceNow, JIRA, CSV dumps, etc.
- The Data Sets (Data Sources) module cleans and transforms the raw data into the required format.
- Processed data is fed into the Clusters & Cluster Snapshots module for cluster creation and updates.
- This new data undergoes NLP analysis to identify commonalities and trends; this is the stage where data cleaning, pre-processing, summarization, and clustering take place.
- Based on cluster insights, the Knowledge Article module generates self-service knowledge articles, which can also be updated with new content upon user request.
- The generated article is then saved in the Knowledge Management System, where users can review, draft, and approve the Knowledge Article as needed.
- Once approved, the user and the agent can be notified of the Knowledge Article, which is mapped for ready reference onto the open tickets grouped into the associated cluster.
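The steps above can be sketched end to end as a small pipeline. This is a minimal illustration only, with hypothetical function and field names (`ingest`, `cluster`, `generate_articles`, a first-word grouping key standing in for real NLP clustering); it does not reflect the actual Leena AI module APIs.

```python
from dataclasses import dataclass

@dataclass
class Ticket:
    source: str   # e.g. "ServiceNow", "JIRA", "CSV"
    subject: str
    body: str

def ingest(raw_rows):
    """Data Sets module stand-in: clean raw rows and transform them into Tickets."""
    tickets = []
    for row in raw_rows:
        subject = row.get("subject", "").strip()
        if not subject:  # cleaning: drop rows missing required fields
            continue
        tickets.append(Ticket(row.get("source", "CSV"), subject,
                              row.get("body", "").strip()))
    return tickets

def cluster(tickets):
    """Clusters module stand-in: group tickets by a naive key.

    The first word of the subject is a placeholder for real NLP-based
    summarization and clustering.
    """
    clusters = {}
    for t in tickets:
        key = t.subject.split()[0].lower()
        clusters.setdefault(key, []).append(t)
    return clusters

def generate_articles(clusters):
    """Knowledge Article module stand-in: draft one article per cluster."""
    return {key: f"How to resolve '{key}' issues ({len(ts)} tickets)"
            for key, ts in clusters.items()}

raw = [
    {"source": "JIRA", "subject": "VPN not connecting", "body": "..."},
    {"source": "ServiceNow", "subject": "VPN drops daily", "body": "..."},
    {"source": "CSV", "subject": ""},  # discarded during cleaning
]
articles = generate_articles(cluster(ingest(raw)))
# articles → {'vpn': "How to resolve 'vpn' issues (2 tickets)"}
```

In the real system the clustering key would come from NLP analysis of ticket text, and the drafted articles would flow into the Knowledge Management System for review and approval rather than being returned directly.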
