Data Flow

Below is the process by which ticketing data is ingested and processed to deliver clustered insights:

  1. Ticket data enters the system from various sources such as Leena AI Helpdesk, ServiceNow, Jira, CSV dumps, etc.
  2. The Data Sets (Data Sources) module cleans and transforms the raw data into the required format.
  3. Processed data is fed into the Clusters & Cluster Snapshots module for cluster creation and updates.
  4. The new data undergoes NLP analysis to identify commonalities and trends; this stage performs data cleaning, pre-processing, summarization, and clustering.
  5. Based on cluster insights, the Knowledge Article module generates self-service knowledge articles, which can also be updated with new content on user request.
  6. The generated article is saved in the Knowledge Management System, where users can review, draft, and approve it as required.
  7. Once approved, the Knowledge Article is shared with the user and agent and mapped to the open tickets grouped under the associated cluster, as a ready reference on those tickets.
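Steps 1–4 above can be sketched in miniature. This is a hypothetical illustration, not the product's actual implementation: the `Ticket` fields, the whitespace/lowercase cleaning, and the greedy Jaccard-similarity clustering are all simplifying assumptions standing in for the real Data Sets and Clusters modules.

```python
from dataclasses import dataclass

@dataclass
class Ticket:
    ticket_id: str
    source: str   # e.g. "servicenow", "jira", "csv" (assumed labels)
    text: str

def clean(ticket: Ticket) -> Ticket:
    # Step 2: normalise raw text into the format required downstream
    # (collapse whitespace, lowercase everything).
    return Ticket(ticket.ticket_id,
                  ticket.source.lower(),
                  " ".join(ticket.text.lower().split()))

def jaccard(a: set, b: set) -> float:
    # Token-set overlap, used here as a stand-in similarity measure.
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster(tickets: list[Ticket], threshold: float = 0.3) -> list[list[Ticket]]:
    # Steps 3–4: greedy clustering — attach each ticket to the first
    # cluster whose representative (first) ticket is similar enough,
    # otherwise start a new cluster.
    clusters: list[list[Ticket]] = []
    for t in tickets:
        tokens = set(t.text.split())
        for c in clusters:
            if jaccard(tokens, set(c[0].text.split())) >= threshold:
                c.append(t)
                break
        else:
            clusters.append([t])
    return clusters
```

For example, two VPN tickets ("VPN not connecting", "vpn not connecting on laptop") would land in one cluster, while a "printer jam error" ticket would start a new one.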
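The article lifecycle in steps 5–7 can be viewed as a small state machine. Again a hedged sketch under assumed names — the states (`DRAFT`, `IN_REVIEW`, `APPROVED`), the allowed transitions, and the `notify_on_approval` helper are illustrative, not the actual Knowledge Management System API.

```python
from enum import Enum, auto

class ArticleState(Enum):
    DRAFT = auto()
    IN_REVIEW = auto()
    APPROVED = auto()

class KnowledgeArticle:
    # Assumed review workflow: draft -> review -> approved,
    # with review allowed to send the article back to draft.
    _transitions = {
        ArticleState.DRAFT: {ArticleState.IN_REVIEW},
        ArticleState.IN_REVIEW: {ArticleState.APPROVED, ArticleState.DRAFT},
        ArticleState.APPROVED: set(),
    }

    def __init__(self, title: str, body: str):
        self.title, self.body = title, body
        self.state = ArticleState.DRAFT

    def move_to(self, new_state: ArticleState) -> None:
        if new_state not in self._transitions[self.state]:
            raise ValueError(
                f"cannot move from {self.state.name} to {new_state.name}")
        self.state = new_state

def notify_on_approval(article: KnowledgeArticle,
                       open_ticket_ids: list[str]) -> list[tuple[str, str]]:
    # Step 7: only an approved article is mapped to the open tickets
    # grouped under the associated cluster, as a ready reference.
    if article.state is not ArticleState.APPROVED:
        return []
    return [(ticket_id, article.title) for ticket_id in open_ticket_ids]
```

A draft article produces no notifications; once moved through review to approved, it is mapped onto every open ticket id passed in.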