API Integration

TikTok Posting API for Queue-Safe Multi-Account Distribution

Ship TikTok posting workflows with explicit status tracking, retry-safe orchestration, and moderation-aware handling for production-scale teams.

Problem

At scale, distribution breaks down wherever posting is still manual: upload and publish actions become fragile across many accounts, schedules, and moderation states.

Result with Ssemble

Ssemble supports content generation and status orchestration while your TikTok posting layer executes controlled publishes with idempotency, retries, and auditable job states.

Implementation essentials

Base URL: https://aiclipping.ssemble.com/api/v1 (+ TikTok Open API for publish)

Integration architecture (ingest → process → poll/webhook → publish)

1) Ingest

Store source URL/file URL + destination account + scheduleAt + idempotency key in your publisher job model.
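A minimal sketch of that job record, in Python. The stored fields come from the guide (source URL, destination account, scheduleAt, idempotency key); the class name, defaults, and extra bookkeeping fields are assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PublishJob:
    # Fields named in the ingest step above.
    job_id: str
    source_url: str            # source URL / file URL to generate from
    tiktok_account_id: str     # destination account
    schedule_at: str           # ISO-8601 publish time
    idempotency_key: str       # stable per job; reused on every retry
    # Bookkeeping fields (assumed): filled in by later pipeline stages.
    request_id: Optional[str] = None  # set after POST /shorts/create
    state: str = "ingested"

job = PublishJob(
    job_id="job_01HT_example",
    source_url="https://example.com/source.mp4",
    tiktok_account_id="acct_kor_store_03",
    schedule_at="2026-03-10T18:00:00Z",
    idempotency_key="2f182b63-312f-4290-9222-7cf6f7f8f12d",
)
```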

2) Process

Call POST /shorts/create, persist requestId, and attach it to downstream posting jobs.
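The create call can be sketched as a pure request builder, which keeps the network send testable and separate. The base URL and X-API-Key header are from this guide; the request-body schema (`sourceUrl`) is an assumption, not the documented contract.

```python
import json

BASE_URL = "https://aiclipping.ssemble.com/api/v1"

def build_create_request(api_key: str, source_url: str) -> tuple:
    """Build the POST /shorts/create call as (url, headers, body).

    The body field name `sourceUrl` is illustrative; check the real
    schema in the Ssemble docs before sending.
    """
    url = f"{BASE_URL}/shorts/create"
    headers = {"X-API-Key": api_key, "Content-Type": "application/json"}
    body = json.dumps({"sourceUrl": source_url}).encode()
    return url, headers, body

# After sending this request, persist the returned requestId on the job
# so every downstream posting step can reference it.
url, headers, body = build_create_request("sk_example", "https://example.com/source.mp4")
```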

3) Poll / Webhook

Move jobs through processing/ready/failed states using GET /shorts/:id/status and event fallbacks.
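Keeping that transition deterministic is easiest with one pure mapping from the status payload to an internal state. A sketch, assuming the status endpoint reports `completed`, `failed`, or an in-progress value; the exact upstream status strings are assumptions.

```python
TERMINAL_STATES = {"ready", "failed"}

def next_state(status_payload: dict) -> str:
    """Map a GET /shorts/:id/status payload to an internal job state.

    Upstream status strings here are assumed; the internal states
    (processing/ready/failed) are the ones this guide uses.
    """
    status = status_payload.get("status")
    if status == "completed":
        return "ready"
    if status == "failed":
        # Persist failureReason from the payload for triage
        # before marking the job terminal.
        return "failed"
    return "processing"
```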

4) Publish

Trigger TikTok publish with per-account concurrency caps, retry policy, and final status reconciliation.
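One way to enforce a per-account concurrency cap is a semaphore keyed by destination account. This is a sketch, not the Ssemble or TikTok SDK: the cap of 1 (full serialization per account) and the `do_publish` callable standing in for the real TikTok call are assumptions.

```python
import threading
from collections import defaultdict

# One concurrency slot per destination account; a cap of 1 serializes
# publishes per account (the cap value is an assumption).
_account_slots = defaultdict(lambda: threading.BoundedSemaphore(1))

def publish(job: dict, do_publish) -> str:
    """Run do_publish under the account's concurrency cap and return
    the reconciled state. do_publish stands in for the real TikTok call."""
    slot = _account_slots[job["tiktokAccountId"]]
    with slot:
        result = do_publish(job)
    # Reconciliation: record the final state the upstream reports,
    # not merely that the request was sent.
    return result.get("state", "publish_unknown")
```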

Endpoint examples

  • POST /shorts/create: Generate short-form asset before posting
  • GET /shorts/:id/status: Track processing lifecycle and failureReason
  • GET /shorts/:id: Retrieve completed output metadata for posting

Auth guidance

Use X-API-Key server-side for Ssemble, and keep TikTok OAuth tokens isolated in your posting service. Rotate each key set independently by environment.
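The key isolation can be as simple as loading each credential set from its own environment-scoped secret. A sketch: the env-var naming scheme is an assumption; only SSEMBLE_API_KEY and X-API-Key come from this guide.

```python
import os

def load_credentials(env: str) -> dict:
    """Load the two credential sets for one environment.

    Keeping them in separate env vars lets each be rotated
    independently; the `_STAGING` / `_PROD` suffix scheme is assumed.
    """
    suffix = env.upper()
    return {
        # Sent as X-API-Key to Ssemble, server-side only.
        "ssemble_api_key": os.environ.get(f"SSEMBLE_API_KEY_{suffix}", ""),
        # TikTok OAuth tokens live only in the posting service.
        "tiktok_oauth_token": os.environ.get(f"TIKTOK_OAUTH_TOKEN_{suffix}", ""),
    }
```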

Rate limits and retries

Treat generation and posting as separate rate-limit budgets. Serialize mutating calls per destination account, and on 429 honor the reset/retryAfter value, backing off exponentially with jitter.
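The backoff rule above can be sketched as one delay function: honor the server's retryAfter when present, otherwise exponential backoff with full jitter. The base and cap values are assumptions.

```python
import random

def backoff_delay(attempt: int, retry_after=None,
                  base: float = 1.0, cap: float = 60.0) -> float:
    """Seconds to wait before retry `attempt` (0-based).

    If the 429 response carried a retryAfter value, honor it exactly;
    otherwise use exponential backoff with full jitter. base/cap are
    illustrative defaults, not documented limits.
    """
    if retry_after is not None:
        return retry_after
    # Full jitter: uniform between 0 and the capped exponential window.
    return random.uniform(0, min(cap, base * (2 ** attempt)))
```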

Request example

POST /api/publish/tiktok
{
  "jobId": "job_01HT...",
  "requestId": "665a1b2c3d4e5f6a7b8c9d0e",
  "tiktokAccountId": "acct_kor_store_03",
  "scheduleAt": "2026-03-10T18:00:00Z",
  "idempotencyKey": "2f182b63-312f-4290-9222-7cf6f7f8f12d"
}

Response example

200 OK
{
  "jobId": "job_01HT...",
  "state": "publish_processing",
  "publishId": "v_inbox_file~v2.123456789",
  "nextCheckAt": "2026-03-10T18:00:30Z"
}

Error scenario example

429 Too Many Requests
{
  "error": {
    "code": "rate_limit_exceeded",
    "message": "Per-account posting throttle reached",
    "details": { "retryAfter": "2026-03-10T18:01:00Z" }
  }
}

Versioning

Pin endpoint versions and log upstream version metadata per request so integration regressions are traceable.

Support

For incident triage, persist requestId + publish_id + account id + error.code + upstream log_id in one queryable record.

Quickstart

  1. Create SSEMBLE_API_KEY and store in backend secrets.
  2. Call POST /shorts/create and persist requestId in your job table.
  3. Poll GET /shorts/:id/status every 10 seconds until terminal state.
  4. On completed, post to TikTok through your server queue.
  5. Handle 429/5xx with bounded retries and stable idempotency keys.
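Step 5 can be sketched as a bounded retry loop; the retry budget and the integer-status convention are assumptions. The key point is that the same idempotency key travels with every attempt, so a retry can never create a duplicate publish.

```python
def call_with_retries(call, is_retryable, max_attempts: int = 5):
    """Invoke `call(attempt)` up to max_attempts times.

    Stops early on the first non-retryable result; after the budget
    is spent, the final result is surfaced for triage. max_attempts
    is an illustrative default.
    """
    last = None
    for attempt in range(max_attempts):
        last = call(attempt)
        if not is_retryable(last):
            return last
    return last

# Hypothetical upstream that returns 429 twice, then succeeds.
responses = iter([429, 429, 200])
result = call_with_retries(
    lambda attempt: next(responses),
    lambda status: status in (429, 500, 502, 503),
)
```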

3-step workflow

  1. Generate short-form assets and store requestId plus publish context in your job table.
  2. Track processing via status polling or webhooks and move jobs to ready/failed states deterministically.
  3. Publish to TikTok with per-account throttle control, idempotency keys, and reconciliation logs.

Why teams choose Ssemble

  • Built for multi-account distribution teams where queue reliability is more important than one-off success.
  • Separates rendering and posting concerns so failures are isolated and recoverable.
  • Supports moderation-aware state handling: upload completion is not always final public availability.

FAQ

Can we treat upload success as published?

Not always. Keep separate states for uploaded, submitted, publish_complete, and publicly_available to avoid false positives.
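Those four states map naturally onto an enum so the "is it live?" check can never pass on mere upload success. The enum and helper names are illustrative.

```python
from enum import Enum

class TikTokPublishState(str, Enum):
    # The four distinct states named above; upload success is the
    # first of them, not the last.
    UPLOADED = "uploaded"
    SUBMITTED = "submitted"
    PUBLISH_COMPLETE = "publish_complete"
    PUBLICLY_AVAILABLE = "publicly_available"

def is_live(state: TikTokPublishState) -> bool:
    """Only public availability counts as live."""
    return state is TikTokPublishState.PUBLICLY_AVAILABLE
```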

Which step must be idempotent?

At minimum, publish initiation. Use one stable idempotency key per job so retries do not create duplicates.
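One way to get a stable per-job key is to derive it deterministically from the job ID, so every retry recomputes the same key instead of minting a new one. The namespace UUID below is an arbitrary fixed value chosen for this sketch.

```python
import uuid

# Arbitrary fixed namespace (an assumption); any constant UUID works,
# as long as it never changes once deployed.
NAMESPACE = uuid.UUID("6ba7b810-9dad-11d1-80b4-00c04fd430c8")

def idempotency_key(job_id: str) -> str:
    """Deterministic key: the same jobId always yields the same key,
    so a retried publish carries the key of the original attempt."""
    return str(uuid.uuid5(NAMESPACE, job_id))
```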

Polling or webhooks: which should we use?

Use webhooks as primary when available and keep polling as fallback for reconciliation and missed events.

How do we avoid 429 bursts across many accounts?

Shard queues by destination account and enforce per-account concurrency limits rather than one global burst queue.
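The sharding above amounts to one FIFO queue per destination account, each drained under its own limit. A minimal in-memory sketch (a production system would back this with a durable queue):

```python
from collections import defaultdict, deque

# One FIFO shard per destination account; a scheduler drains each
# shard under its own per-account limit, so no burst spans accounts.
shards = defaultdict(deque)

def enqueue(job: dict) -> None:
    shards[job["tiktokAccountId"]].append(job)

def drain_one(account_id: str):
    """Pop the next pending job for one account, or None if its
    shard is empty."""
    q = shards.get(account_id)
    return q.popleft() if q else None

enqueue({"jobId": "j1", "tiktokAccountId": "acct_a"})
enqueue({"jobId": "j2", "tiktokAccountId": "acct_b"})
```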

What logs are mandatory for incident response?

Record requestId, publish_id, account id, status transitions, and error code to support deterministic debugging.

Does this guarantee moderation completion time?

No. Moderation windows vary, so production systems should treat final availability timing as asynchronous and variable.
