"Kling AI" API Specification
Update Time | Update Notes
2025.3.25 | ○Selecting Face as reference: keeps the face consistent with the reference image while letting the prompt change other elements such as the outfit and the background; great for single-character consistency across different looks ○Selecting Subject as reference: sets the entire subject as the reference, and lets you fine-tune the reference strength for the face and the subject; great for keeping a character's appearance consistent across different scenes |
2025.3.18 | ○Launch of "Single Image Effects": fuzzyfuzzy ○Includes create task, query task (single), and query task (list) interfaces |
2025.3.12 | ○Launch of "Single Image Effects": 2 types available, "squish" and "expansion" ○Includes create task, query task (single), and query task (list) interfaces ○V1.5 Model PRO mode supports start/end frame, end frame, motion brush, and camera control (simple only) ○V1.6 Model PRO mode supports start/end frame ○Supports lip-syncing for any 1080p or 720p video within 10s ○Added 8 new Chinese and English voices that can be used directly to dub lip-sync videos ○Improved aesthetics: better composition and lighting, especially in portraits ○Improved image quality: enhanced detail fidelity, more natural rendering, and stronger tonal contrast ○Supports 21:9 aspect ratio |
2025.3.5 | 【Video Generation】New Feature: Creative Video Effects ○Launch of "Dual-character Effects": 3 types available, "hug", "kiss", and "heart_gesture" ○Includes create task, query task (single), and query task (list) interfaces ⭐ Compared to the general video generation API, the video effects API offers more flexible calling parameters and integrates pre- and post-processing tailored for special effects (e.g., dual-character effects). For example, users can input two portrait images; the API automatically stitches them into a single composite image and generates the video from it. This makes the API more flexible and efficient. |
2025.2.14 | 【Image Generation】The "model" field has been renamed to "model_name" ❗ Please note: to maintain naming consistency, the original model field has been renamed to model_name; going forward, please use this field to specify the model version to call. ●The behavior remains backward-compatible: if you continue to use the original model field, interface calls are unaffected and no exceptions will occur; this is equivalent to the default behavior when model_name is empty (i.e., calling the V1 model). |
2025.1.7 | ●Supports text-to-video STD mode, image-to-video STD mode, and image-to-video PRO mode ●Does not currently support tail frame, motion brush, camera control, or other control features ❗ Please note: to maintain naming consistency, the original model field has been renamed to model_name; going forward, please use this field to specify the model version to call. ●The behavior remains backward-compatible: if you continue to use the original model field, interface calls are unaffected and no exceptions will occur; this is equivalent to the default behavior when model_name is empty (i.e., calling the V1 model). |
2024.12.30 | 【Virtual Try-On】New V1.5 model ●The V1.5 model is a comprehensive upgrade of the V1.0 model ●The V1.5 model supports single-garment (upper, lower, and dress) try-on, as well as "upper + lower" combination try-on |
2024.12.23 | 【Video Generation】 New Feature: Lip-Sync ●Videos generated by the Kling V1.0 model and Kling V1.5 model support lip-sync as long as the video meets the facial requirements ●Includes create task, query task (single), and query task (list) interfaces |
2024.12.9 | 【Video Generation】Kling V1.5 Std Model Now Open for Video Generation: Image-to-Video Enabled, Text-to-Video Unsupported ●Supports standard mode ●Tail frame control is not supported ●All other parameters are supported ❗ Please note: to maintain naming consistency, the original model field has been renamed to model_name; going forward, please use this field to specify the model version to call. ●The behavior remains backward-compatible: if you continue to use the original model field, interface calls are unaffected and no exceptions will occur; this is equivalent to the default behavior when model_name is empty (i.e., calling the V1 model). |
2024.12.2 | 【Video Generation】Capability Map ●With multiple versions of the video generation model (V1, V1.5) and various plugin capabilities (e.g., camera control, start/end frame, motion brush, video extension, etc.), we have created a "Capability Map" to make it easier for everyone to visually check the availability of different versions and features. For details, please refer to the "3-0 Capability Map." |
2024.11.29 | 【Video Generation - ImageToVideo】New Feature: Motion Brush ●This feature is only supported in Standard Mode 5s and Professional Mode 5s for the V1.0 model, and is currently not available for the V1.5 model. |
2024.11.15 | 【Video Generation】Kling V1.5 Pro Model Now Open for Video Generation: Image-to-Video Enabled, Text-to-Video Unsupported ●Only supports professional mode ●Tail frame control is not supported ●All other parameters are supported 【Video Generation】New Feature: Video Extension ●Supports extending videos generated by the V1.0 model directly, adding 4-5 seconds of video length per extension ●Includes create task, query task (single), and query task (list) interfaces 【Video Generation】Other Updates ●Added the "external_task_id" field, allowing you to set a custom task ID when creating a task and query the video by that custom ID when needed ❗ Please note: to maintain naming consistency, the original model field has been renamed to model_name; going forward, please use this field to specify the model version to call. ●The behavior remains backward-compatible: if you continue to use the original model field, interface calls are unaffected and no exceptions will occur; this is equivalent to the default behavior when model_name is empty (i.e., calling the V1 model). |
2024.10.30 | Added the "Query Resource Package List and Remaining Quantity" interface for your convenience. See "Section VI: Account Information Query". |
2024.10.25 | To clarify the storage duration of model-generated content (images/videos): ●To ensure information security, generated images/videos will be cleared after 30 days. Please make sure to save them promptly. |
2024.10.15 | Added sample Java code for generating the API_Token. |
2024.9.19 | 【Video Generation】●When creating a task, the character limit for prompt and negative_prompt in the request parameters has been updated: each cannot exceed 2500 characters. |
2024.9.19 | Official support for the "AI Virtual Try-On" API (kolors-virtual-try-on). |
I. General Information
1.API Domain
💎
https://api.klingai.com
2.API Authentication
●Step-1: Obtain your AccessKey + SecretKey
●Step-2: Each time you request the API, generate an API Token using the fixed encryption method, and set Authorization = Bearer <API Token> in the Request Header
●Encryption Method: Follows the JWT (JSON Web Token, RFC 7519) standard
●JWT consists of three parts: Header, Payload, Signature
●Sample code (Python):
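A minimal, dependency-free sketch of API Token generation. The HS256 signing algorithm and the claim set used here (iss = AccessKey, exp = expiry, nbf = not-before) are assumptions for illustration; confirm the exact claims and token lifetime against your platform credentials page.

```python
import base64
import hashlib
import hmac
import json
import time


def b64url(data: bytes) -> str:
    # Base64url encoding without padding, per RFC 7515
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")


def generate_api_token(access_key: str, secret_key: str, ttl: int = 1800) -> str:
    # JWT Header: signing algorithm (HS256 assumed) and token type
    header = {"alg": "HS256", "typ": "JWT"}
    now = int(time.time())
    # JWT Payload (assumed claims): issuer = AccessKey,
    # exp = expiry (30 min here), nbf = not valid before (5 s of clock skew)
    payload = {"iss": access_key, "exp": now + ttl, "nbf": now - 5}
    # Signing input: base64url(header) + "." + base64url(payload)
    signing_input = ".".join(
        b64url(json.dumps(part, separators=(",", ":")).encode("utf-8"))
        for part in (header, payload)
    )
    # Signature: HMAC-SHA256 over the signing input, keyed by the SecretKey
    sig = hmac.new(secret_key.encode("utf-8"),
                   signing_input.encode("utf-8"),
                   hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)


api_token = generate_api_token("my-access-key", "my-secret-key")
```

A JWT library such as PyJWT can replace the manual encoding; the three dot-separated segments of the result correspond to the Header, Payload, and Signature described above.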
●Sample code (Java):
●Step-3: Use the API Token generated in Step-2 to assemble the Authorization header and include it in the Request Header.
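The assembled header takes the form Authorization: Bearer <API Token>. A standard-library sketch of a request carrying it; the endpoint path below is purely illustrative (not taken from this document), so substitute the real path from the interface sections that follow.

```python
import urllib.request

# Hypothetical endpoint path for illustration only; look up the real
# path in the interface reference sections of this document.
url = "https://api.klingai.com/v1/videos/text2video"

api_token = "eyJ...example-token"  # the token produced in Step-2

# Assemble the Authorization header as "Bearer <API Token>"
req = urllib.request.Request(
    url,
    headers={
        "Authorization": "Bearer " + api_token,
        "Content-Type": "application/json",
    },
    method="POST",
)
# Sending the request is then: urllib.request.urlopen(req, data=body)
```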