CSV Upload

Lexsis AI supports uploading feedback data sets via CSV files. This allows you to bulk import historical feedback or sync data from systems that don't have direct API integrations.

Overview

CSV upload enables you to:

  • Upload large batches of feedback at once

  • Import historical feedback data

  • Sync data from systems without API integrations

  • Include rich metadata for better classification

File Format Requirements

  • File types: CSV (.csv)

  • Encoding: UTF-8 (Latin-1 is also accepted as a fallback)

  • Delimiter: Comma (,)

  • Header row: First row must contain column names

  • Maximum file size: 50 MB

  • Maximum rows: 25,000 rows per file

  • Maximum columns: 50 columns per file

Required Columns

Only the following column is required:

| Column Name | Type | Description |
| --- | --- | --- |
| raw_content | string | The raw text feedback that you'd like Lexsis AI to process |

Tip: You can also use the alias content instead of raw_content.

Example (Minimum Required)
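A minimal illustrative file containing only the required column (row values are hypothetical):

```csv
raw_content
"The new dashboard is great, but export to PDF is still missing."
"App crashes when I open settings on my phone."
```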

Recommended Columns

The following columns are highly recommended for best results. If not provided, they will be auto-generated:

| Column Name | Type | Description | Default if missing |
| --- | --- | --- | --- |
| submitted_at | datetime | The date and time that the feedback was submitted. Accepts ISO 8601 format (YYYY-MM-DDTHH:MM:SSZ) or date format (YYYY-MM-DD) | Current date/time at upload |
| source_identifier | string | A unique identifier for the feedback submission (e.g., review ID, ticket ID). Must be unique within your tenant | Auto-generated unique identifier |

Column Aliases

For convenience, the following aliases are supported. You can use either the alias or the standard column name in your CSV header:

| Alias | Maps to |
| --- | --- |
| content | raw_content |
| date | submitted_at |
| id | source_identifier |

Example Using Aliases
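An illustrative file using the aliases in place of the standard column names (values are hypothetical):

```csv
content,date,id
"Love the new search filters!",2024-05-01,review-1001
"Checkout keeps timing out.",2024-05-02T14:30:00Z,ticket-2042
```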

Optional Columns - Core Submission Fields

| Column Name | Type | Description |
| --- | --- | --- |
| source_id | UUID | UUID of the source integration. If not provided, a default source will be used |
| title | string | Optional title for the feedback submission |

Optional Columns - Metadata Fields

All metadata columns map directly to the related_metadata object in the submission. Column names must match the exact field names from the data model (case-sensitive).

Customer/User Information

| Column Name | Type | Description |
| --- | --- | --- |
| customer_id | string | Unique identifier for the customer/user |
| customer_name | string | Customer/user name |
| customer_email | string | Customer email address |
| customer_segment | string | Customer segment. Must be one of: Enterprise, SMB, Free, Trial |
| customer_value | float | Monetary value of customer (MRR, LTV, etc.). Used for priority calculation |

Business/Organization Information

| Column Name | Type | Description |
| --- | --- | --- |
| business_name | string | Business/organization name |
| business_id | string | Unique identifier for business/organization |

Platform/Product Information

| Column Name | Type | Description |
| --- | --- | --- |
| platform | string | Platform where feedback was submitted. Must be one of: iOS, Android, Web, Desktop, Mobile, Tablet |
| app_version | string | Application version (e.g., 4.94.0) |
| product_version | string | Product version |
| review_created_version | string | App version when review was created |

Customer Journey Information

| Column Name | Type | Description |
| --- | --- | --- |
| account_age_days | integer | Number of days since account creation. Used for journey stage inference |
| usage_frequency | string | Frequency of product usage. Must be one of: Daily, Weekly, Monthly, Rarely, Never |
| subscription_status | string | Subscription status. Must be one of: Active, Trial, Expired, Cancelled |

Rating/Score Information

| Column Name | Type | Description |
| --- | --- | --- |
| score | integer | Rating score (1-5 stars) |
| thumbs_up_count | integer | Number of thumbs up/helpful votes |

Location/Geography

| Column Name | Type | Description |
| --- | --- | --- |
| location | string | Geographic location (country, region, etc.) |
| country | string | Country code (ISO 3166-1 alpha-2, e.g., us, gb) |
| language | string | Language code (ISO 639-1, e.g., en, es) |

Response/Reply Information

| Column Name | Type | Description |
| --- | --- | --- |
| reply_content | string | Response/reply content from business |
| replied_at | datetime | Timestamp when reply was posted (ISO 8601 format) |

Custom Fields

If your data set includes custom fields that don't map to the standard metadata fields, you can include them in your CSV. During upload, you can select any additional columns to bring in as custom fields; these are preserved for reference and can be used in filtering and analysis.

Example CSV Files

Minimal Example (Required Column Only)
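An illustrative file with only the required raw_content column (values are hypothetical):

```csv
raw_content
"Please add dark mode."
"The onboarding flow was confusing."
```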

Example with Aliases
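An illustrative file using the content, date, and id aliases (values are hypothetical):

```csv
content,date,id
"Would love offline support.",2024-04-15,fb-001
"Billing page loads slowly.",2024-04-16,fb-002
```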

Example with Metadata
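An illustrative file combining the core columns with several metadata fields (values are hypothetical):

```csv
raw_content,submitted_at,source_identifier,customer_id,customer_segment,customer_value,platform,app_version,score
"Sync is much faster after the update.",2024-05-03T09:15:00Z,rev-3310,cust-88,Enterprise,1200.50,iOS,4.94.0,5
"Search results feel irrelevant lately.",2024-05-03,rev-3311,cust-91,SMB,89.00,Web,4.93.2,2
```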

Example with Journey Information
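An illustrative file including the customer journey fields (values are hypothetical):

```csv
raw_content,submitted_at,account_age_days,usage_frequency,subscription_status
"I still can't find the billing page.",2024-05-04,12,Daily,Trial
"Renewed because support has been excellent.",2024-05-04,730,Weekly,Active
```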

Example with Custom Fields
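An illustrative file where nps_score and sales_region are hypothetical custom columns that don't map to standard metadata fields; during upload you would select them to bring them in as custom fields (values are hypothetical):

```csv
raw_content,submitted_at,nps_score,sales_region
"Support resolved my issue quickly.",2024-05-05,9,EMEA
"Setup took longer than expected.",2024-05-05,6,APAC
```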

Column Mapping

Core Fields

CSV columns map directly to submission fields:

  • raw_content (or content) → raw_content (submission field)

  • submitted_at (or date) → submitted_at (submission field, auto-generated if missing)

  • source_identifier (or id) → source_identifier (submission field, auto-generated if missing)

  • source_id → source_id (submission field, optional)

Important Notes

Column Name Requirements

  • Case-sensitive: Column names must match exactly (e.g., customer_id not Customer_ID or customerId)

  • Exact spelling: Column names must match the data model field names exactly (or use a supported alias)

  • No spaces: Use underscores, not spaces (e.g., customer_name not customer name)

  • No duplicates: Each column name must appear only once in the header row

Data Type Validation

  • Dates: Use ISO 8601 format (YYYY-MM-DDTHH:MM:SSZ) or date format (YYYY-MM-DD)

  • Integers: Must be valid integers (e.g., score must be 1-5)

  • Floats: Must be valid decimal numbers (e.g., customer_value must be a number)

  • Enums: Must match exact values (e.g., platform must be one of the allowed values)
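These rules can be pre-checked locally before uploading. Below is a minimal sketch in Python; it is not the Lexsis AI validator, the alias map and allowed values are copied from the tables above, and the row-numbering and error-reporting conventions are assumptions made for illustration:

```python
import csv
import io
from datetime import datetime

# Alias map and enum values as documented on this page.
REQUIRED = {"raw_content"}
ALIASES = {"content": "raw_content", "date": "submitted_at", "id": "source_identifier"}
PLATFORMS = {"iOS", "Android", "Web", "Desktop", "Mobile", "Tablet"}

def validate_row(row):
    """Return a list of problems found in one alias-normalized CSV row."""
    problems = []
    if not (row.get("raw_content") or "").strip():
        problems.append("raw_content is empty")
    if row.get("score"):
        try:
            if not 1 <= int(row["score"]) <= 5:
                problems.append("score must be 1-5")
        except ValueError:
            problems.append("score must be an integer")
    if row.get("platform") and row["platform"] not in PLATFORMS:
        problems.append(f"platform {row['platform']!r} not allowed")
    if row.get("submitted_at"):
        ok = False
        for fmt in ("%Y-%m-%dT%H:%M:%SZ", "%Y-%m-%d"):  # the two accepted formats
            try:
                datetime.strptime(row["submitted_at"], fmt)
                ok = True
                break
            except ValueError:
                pass
        if not ok:
            problems.append("submitted_at is not ISO 8601")
    return problems

def validate_csv(text):
    """Normalize aliases in the header, then map row number -> problems."""
    reader = csv.DictReader(io.StringIO(text))
    header = [ALIASES.get(h, h) for h in reader.fieldnames]
    missing = REQUIRED - set(header)
    if missing:
        return {0: [f"missing required column(s): {sorted(missing)}"]}
    errors = {}
    for n, raw in enumerate(reader, start=2):  # row 1 is the header
        row = {ALIASES.get(k, k): v for k, v in raw.items()}
        if (problems := validate_row(row)):
            errors[n] = problems
    return errors

sample = "content,date,score\nGreat app!,2024-05-01,5\n,2024-13-99,9\n"
print(validate_csv(sample))
```

Running the sketch on the sample flags the second data row for an empty raw_content, an out-of-range score, and an invalid date, while the first row passes.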

Unique Identifiers

The source_identifier must be unique within your tenant. Duplicate identifiers will cause those rows to be skipped.

Upload Limits

  • File size: Maximum 50 MB per file

  • Row limit: Maximum 25,000 rows per file

  • Column limit: Maximum 50 columns per file

  • Processing time: Typically 1-5 minutes per 1,000 rows, depending on metadata complexity

Processing

Once uploaded successfully:

  1. Validation: CSV is validated for format and data integrity

  2. Import: Data is imported into Lexsis AI

  3. Processing: Submissions are automatically processed through the pipeline (if auto-processing is enabled):

    • Segmentation

    • Classification

    • Bucketing

  4. Completion: You'll receive a notification when processing is complete

Next Steps

  • Submit Feedback via REST API - Submit feedback programmatically

  • API Overview - Explore other API endpoints

  • Getting Started Guide - Set up your account
