Response Fields

This page explains the data returned by the LLM Scraper. Each field is briefly described, with an example where relevant, so teams can quickly understand and integrate the response.

Top-Level Fields

| Field | Description | Example |
| --- | --- | --- |
| `traceID` | Unique identifier for the request, used for debugging, support, and internal tracking. | `"traceID": "71f9e0b2-3a77-4ad0-9a61-23b7e1bfa8d1"` |
| `timestamp` | ISO-8601 timestamp indicating when the request was processed. | `"timestamp": "2025-12-08T11:39:19.269Z"` |
| `request_duration` | End-to-end request duration in seconds (from request received to response returned). | `"request_duration": 6.3` |
| `process_duration` | Total internal processing time in seconds, covering the full request lifecycle. | `"process_duration": 6.4` |
| `scraper` | Name of the LLM engine used as the scraper. | `"scraper": "Perplexity"` |
| `llm_model` | LLM model used by the provider. | `"llm_model": "gpt-5.4"` |

Note: request_duration is included within process_duration. For example, if request_duration is 6.3 seconds and process_duration is 6.4 seconds, the overall time experienced by the client is 6.4 seconds.


Response Object

The main parsed output from the LLM.


prompt

The original prompt sent to the LLM for this request. This field is always returned as part of the response for traceability and debugging.

Example

"prompt": "what is the best seo tips in 2025"

text_markdown

Markdown-formatted version of the response, preserving structure such as headings, lists, emphasis, and inline citations.

This field represents the primary readable output of the LLM, including formatting and citation markers as returned by the model.


Example
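A hypothetical example (the value is illustrative; the exact Markdown structure and citation-marker style depend on the model's output):

```json
"text_markdown": "## Best SEO Tips in 2025\n\n1. **Optimize for AI search** [1]\n2. Build topical authority with content clusters [2]"
```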


text (plain text)

Plain-text version of the response with formatting removed. Useful for storage, indexing, or systems that don’t support Markdown.

Example
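A hypothetical example showing the same content with Markdown formatting stripped (value is illustrative):

```json
"text": "Best SEO Tips in 2025\n\n1. Optimize for AI search\n2. Build topical authority with content clusters"
```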


citations_found

Boolean indicating whether the model output included citations.

Example
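For instance, when the output contains citations:

```json
"citations_found": true
```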


citations

Flat list of citation objects extracted from Perplexity.

Each citation includes:

  • id

  • title

  • url

  • section: "citations" or "more"

Example
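A hypothetical example (field names follow the list above; the values, and the assumption that `id` is numeric, are illustrative):

```json
"citations": [
  {
    "id": 1,
    "title": "SEO Trends 2025",
    "url": "https://example.com/seo-trends-2025",
    "section": "citations"
  },
  {
    "id": 2,
    "title": "Topical Authority Guide",
    "url": "https://example.com/topical-authority",
    "section": "more"
  }
]
```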


inline_citations

Structured citations extracted directly from the main response text.

Each inline citation represents a specific source tied to an exact text span (anchor) in the generated answer.

Unlike citations, which are a flat list of sources, inline_citations show where and how each source is used inside the content.

Each inline citation includes:

  • id

  • title

  • url

  • text_anchor

Example
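A hypothetical example (field names follow the list above; values are illustrative, and `text_anchor` is assumed to hold the exact text span the source supports):

```json
"inline_citations": [
  {
    "id": 1,
    "title": "SEO Trends 2025",
    "url": "https://example.com/seo-trends-2025",
    "text_anchor": "Optimize for AI search"
  }
]
```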

Notes

  • A single text span may have multiple inline citations

  • id corresponds to the same source in the citations array


follow_ups

Suggested follow-up questions generated based on the original query and response.

These represent related queries a user might ask next, helping to explore the topic further or refine the results.

Each follow-up is returned as a simple string.


Example
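A hypothetical example (an array of plain strings, as described above; the questions themselves are illustrative):

```json
"follow_ups": [
  "How does AI search affect SEO rankings in 2025?",
  "What tools help measure topical authority?"
]
```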


media_items

Structured media elements extracted from the response, such as images and videos displayed alongside the answer.

These items represent visual content returned by Perplexity, including top images, video results, and additional media related to the query.

Each media item includes:

  • type

  • title

  • url

  • source_url


Example
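A hypothetical example (field names follow the list above; the values, and the assumption that `type` distinguishes `"image"` from `"video"`, are illustrative):

```json
"media_items": [
  {
    "type": "image",
    "title": "SEO trends chart 2025",
    "url": "https://example.com/images/seo-chart.png",
    "source_url": "https://example.com/seo-trends-2025"
  }
]
```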
