
Customize

Summary

This plugin provides users the ability to:

  • Add/delete columns in domain layer tables
  • Insert values into certain columns with data extracted from raw layer tables
  • Import data from CSV files (only the issues, issue_commits, issue_repo_commits, sprints, issue_worklogs, issue_changelogs, qa_apis, qa_test_cases and qa_test_case_executions tables are supported)

NOTE: The names of columns added via this plugin must start with the prefix x_

For now, only the following six types are supported:

  • varchar(255)
  • text
  • bigint
  • float
  • timestamp
  • array

Sample Request

Trigger Data Extraction

To extract data, switch to Advanced Mode on the first step of creating a Blueprint and paste a JSON config like the example below.

The example demonstrates how to extract the status name from the table _raw_jira_api_issues:

  1. For non-array types: Extract the status name from the _raw_jira_api_issues table and assign it to the x_test column in the issues table.
  2. For array types: Extract the status name from the _raw_jira_api_issues table and create a new issue_custom_array_fields table containing issue_id, field_id, and value columns. This table has a one-to-many relationship with the issues table: issue_id is the id of the corresponding issue, x_test corresponds to the field_id column, and the value of x_test corresponds to the value column.

We leverage the package https://github.com/tidwall/gjson to extract values from the raw JSON. For the extraction syntax, please refer to its documentation.

  • table: domain layer table name
  • rawDataTable: raw layer table, from which we extract values by json path
  • rawDataParams: the filter to select records from the raw layer table (the value should be a string, not an object)
  • mapping: the extraction rule; the key is the extension field name, and the value is the JSON path
[
  [
    {
      "plugin": "customize",
      "options": {
        "transformationRules": [
          {
            "table": "issues",
            "rawDataTable": "_raw_jira_api_issues",
            "rawDataParams": "{\"ConnectionId\":1,\"BoardId\":8}",
            "mapping": {
              "x_test": "fields.status.name"
            }
          }
        ]
      }
    }
  ]
]

You can also trigger data extraction by making a POST request to /pipelines.

curl 'http://localhost:8080/pipelines' \
--header 'Content-Type: application/json' \
--data-raw '
{
  "name": "extract fields",
  "plan": [
    [
      {
        "plugin": "customize",
        "options": {
          "transformationRules": [
            {
              "table": "issues",
              "rawDataTable": "_raw_jira_api_issues",
              "rawDataParams": "{\"ConnectionId\":1,\"BoardId\":8}",
              "mapping": {
                "x_test": "fields.status.name"
              }
            }
          ]
        }
      }
    ]
  ]
}
'

List Columns

Get all columns of the table issues

GET /plugins/customize/issues/fields
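
For example, assuming the DevLake API server is reachable at localhost:8080 (as in the pipeline example above):

curl 'http://localhost:8080/plugins/customize/issues/fields'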

NOTE: some fields are omitted in the following example response.

[
  {
    "columnName": "id",
    "displayName": "",
    "dataType": "varchar(255)",
    "description": ""
  },
  {
    "columnName": "created_at",
    "displayName": "",
    "dataType": "datetime(3)",
    "description": ""
  },
  {
    "columnName": "x_time",
    "displayName": "time",
    "dataType": "timestamp",
    "description": "test for time"
  },
  {
    "columnName": "x_int",
    "displayName": "bigint",
    "dataType": "bigint",
    "description": "test for int"
  },
  {
    "columnName": "x_float",
    "displayName": "float",
    "dataType": "float",
    "description": "test for float"
  },
  {
    "columnName": "x_text",
    "displayName": "text",
    "dataType": "text",
    "description": "test for text"
  },
  {
    "columnName": "x_varchar",
    "displayName": "varchar",
    "dataType": "varchar(255)",
    "description": "test for varchar"
  }
]

Create a Customized Column

Create a new column x_abc with data type varchar(255) for the table issues.

The value of columnName must start with x_ and consist of no more than 50 alphanumeric characters and underscores. The value of the dataType field must be one of the following five types:

  • varchar(255)
  • text
  • bigint
  • float
  • timestamp

POST /plugins/customize/issues/fields

{
  "columnName": "x_abc",
  "displayName": "ABC",
  "dataType": "varchar(255)",
  "description": "test field"
}
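
For example, assuming the API server is at localhost:8080, the column can be created with curl:

curl 'http://localhost:8080/plugins/customize/issues/fields' \
--header 'Content-Type: application/json' \
--data-raw '
{
  "columnName": "x_abc",
  "displayName": "ABC",
  "dataType": "varchar(255)",
  "description": "test field"
}
'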

Drop a Column

Drop the column x_test of the table issues.

DELETE /plugins/customize/issues/fields/x_test
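
For example, assuming the API server is at localhost:8080:

curl --request DELETE 'http://localhost:8080/plugins/customize/issues/fields/x_test'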

Upload issues.csv file

POST /plugins/customize/csvfiles/issues.csv

The HTTP Content-Type must be multipart/form-data, and the form should have four fields:

  • file: The CSV file to upload
  • boardId: It will be written to the id field of the boards table, the board_id field of board_issues, and the _raw_data_params field of issues
  • boardName: It will be written to the name field of the boards table
  • incremental: Whether to import incrementally (default: false)

Upload a CSV file and import it to the issues table via this API. The file should contain no extra fields except labels and sprint_ids, and NULL values must be written as NULL in the CSV rather than as empty strings.

Note:

  • The sprint_ids field should contain comma-separated sprint IDs (e.g. "sprint1,sprint2")
  • These values will be automatically written to the sprint_issues table during import

DevLake will parse the CSV file and store it in the issues table, where the labels are stored in the issue_labels table. If the boardId does not already exist, a new record will be created in the boards table. The board_issues table will be updated at the same time as the import. The following is an issues.csv file sample:
id,_raw_data_params,url,icon_url,issue_key,title,description,epic_key,type,status,original_status,story_point,resolution_date,created_date,updated_date,parent_issue_id,priority,original_estimate_minutes,time_spent_minutes,time_remaining_minutes,creator_id,creator_name,assignee_id,assignee_name,severity,component,lead_time_minutes,original_project,original_type,x_int,x_time,x_varchar,x_float,labels,sprint_ids
bitbucket:BitbucketIssue:1:1board789https://api.bitbucket.org/2.0/repositories/thenicetgp/lake/issues/11issue testbitbucket issues test for devlakeissueTODOnew0NULL2022-07-17 07:15:55.959+00:002022-07-17 09:11:42.656+00:00major000bitbucket:BitbucketAccount:1:62abf394192edb006fa0e8cftgpbitbucket:BitbucketAccount:1:62abf394192edb006fa0e8cftgpNULLNULLNULL102022-09-15 15:27:56world8NULLsprint1,sprint2
bitbucket:BitbucketIssue:1:10board789https://api.bitbucket.org/2.0/repositories/thenicetgp/lake/issues/1010issue test007issue test007issueTODOnew0NULL2022-08-12 13:43:00.783+00:002022-08-12 13:43:00.783+00:00trivial000bitbucket:BitbucketAccount:1:62abf394192edb006fa0e8cftgpbitbucket:BitbucketAccount:1:62abf394192edb006fa0e8cftgpNULLNULLNULL302022-09-15 15:27:56abc2456790hello worlds
bitbucket:BitbucketIssue:1:13board789https://api.bitbucket.org/2.0/repositories/thenicetgp/lake/issues/1313issue test010issue test010issueTODOnew0NULL2022-08-12 13:44:46.508+00:002022-08-12 13:44:46.508+00:00critical000bitbucket:BitbucketAccount:1:62abf394192edb006fa0e8cftgpNULLNULLNULL12022-09-15 15:27:56NULL0.00014NULL
bitbucket:BitbucketIssue:1:14board789https://api.bitbucket.org/2.0/repositories/thenicetgp/lake/issues/1414issue test011issue test011issueTODOnew0NULL2022-08-12 13:45:12.810+00:002022-08-12 13:45:12.810+00:00blocker000bitbucket:BitbucketAccount:1:62abf394192edb006fa0e8cftgpbitbucket:BitbucketAccount:1:62abf394192edb006fa0e8cftgpNULLNULLNULL415345684643512022-09-15 15:27:56NULLNULLlabel1,label2,label3
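
For example, assuming the API server is at localhost:8080 and a local file named issues.csv (the boardName value below is just a placeholder):

curl 'http://localhost:8080/plugins/customize/csvfiles/issues.csv' \
--form 'file=@./issues.csv' \
--form 'boardId=board789' \
--form 'boardName=sample board' \
--form 'incremental=false'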

Upload issue_commits.csv file

POST /plugins/customize/csvfiles/issue_commits.csv

The Content-Type should be multipart/form-data, and the form should have two fields:

  • file: The CSV file
  • boardId: It will be written to the _raw_data_params field of issue_commits

The following is an issue_commits.csv file sample:

issue_id,commit_sha
jira:JiraIssue:1:10063,8748a066cbaf67b15e86f2c636f9931347e987cf
jira:JiraIssue:1:10064,e6bde456807818c5c78d7b265964d6d48b653af6
jira:JiraIssue:1:10065,8f91020bcf684c6ad07adfafa3d8a2f826686c42
jira:JiraIssue:1:10066,0dfe2e9ed88ad4e27f825d9b67d4d56ac983c5ef
jira:JiraIssue:1:10145,07aa2ebed68e286dc51a7e0082031196a6135f74
jira:JiraIssue:1:10145,d70d6687e06304d9b6e0cb32b3f8c0f0928400f7
jira:JiraIssue:1:10159,d28785ff09229ac9e3c6734f0c97466ab00eb4da
jira:JiraIssue:1:10202,0ab12c4d4064003602edceed900d1456b6209894
jira:JiraIssue:1:10203,980e9fe7bc3e22a0409f7241a024eaf9c53680dd
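
For example, assuming the API server is at localhost:8080 and a local file named issue_commits.csv:

curl 'http://localhost:8080/plugins/customize/csvfiles/issue_commits.csv' \
--form 'file=@./issue_commits.csv' \
--form 'boardId=board789'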

Upload issue_repo_commits.csv file

POST /plugins/customize/csvfiles/issue_repo_commits.csv

API Description

Upload issue_repo_commits.csv file to import issue-repo commit relationships into DevLake.

Request

  • Content-Type: multipart/form-data
  • Parameters:
      • boardId (required): The ID of the board
      • incremental (optional): Whether to import incrementally (default: false)
      • file (required): The CSV file to upload

Responses

  • 200: Success
  • 400: Bad Request
  • 500: Internal Server Error

CSV Format

The CSV file should contain the following columns:

issue_id,repo_url,commit_sha,host,namespace,repo_name
jira:JiraIssue:1:10063,https://github.com/apache/devlake.git,8748a066cbaf67b15e86f2c636f9931347e987cf,github.com,apache,devlake
jira:JiraIssue:1:10064,https://github.com/apache/devlake.git,e6bde456807818c5c78d7b265964d6d48b653af6,github.com,apache,devlake

Upload sprints.csv file

POST /plugins/customize/csvfiles/sprints.csv

The Content-Type should be multipart/form-data, and the form should have three fields:

  • file: The CSV file to upload
  • boardId: The ID of the board
  • incremental: Whether to import incrementally (default: false)

The following is a sprints.csv file sample:

id,url,status,name,start_date,ended_date,completed_date
sprint1,https://jira.example.com/sprint/1,ACTIVE,Sprint 1,2022-01-01 00:00:00,2022-01-14 00:00:00,NULL
sprint2,https://jira.example.com/sprint/2,CLOSED,Sprint 2,2022-01-15 00:00:00,2022-01-28 00:00:00,2022-01-28 12:00:00
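
For example, assuming the API server is at localhost:8080 and a local file named sprints.csv:

curl 'http://localhost:8080/plugins/customize/csvfiles/sprints.csv' \
--form 'file=@./sprints.csv' \
--form 'boardId=board789' \
--form 'incremental=false'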

Upload issue_worklogs.csv file

POST /plugins/customize/csvfiles/issue_worklogs.csv

The Content-Type should be multipart/form-data, and the form should have three fields:

  • file: The CSV file to upload
  • boardId: The ID of the board
  • incremental: Whether to import incrementally (default: false)

The following is an issue_worklogs.csv file sample (the author_name value will create an account record):

id,issue_id,author_name,time_spent_minutes,started_date,logged_date,comment
1,jira:JiraIssue:1:10063,John Doe,120,2022-01-01 09:30:00,2022-01-01 10:00:00,Initial investigation
2,jira:JiraIssue:1:10064,Jane Smith,60,2022-01-02 14:00:00,2022-01-02 14:30:00,Bug fixing

Upload issue_changelogs.csv file

POST /plugins/customize/csvfiles/issue_changelogs.csv

The Content-Type should be multipart/form-data, and the form should have three fields:

  • file: The CSV file to upload
  • boardId: The ID of the board
  • incremental: Whether to import incrementally (default: false)

The following is an issue_changelogs.csv file sample (the author_name value will create an account record):

id,issue_id,author_name,field_name,original_from_value,original_to_value,created_date
1,jira:JiraIssue:1:10063,John Doe,status,Open,In Progress,2022-01-01 09:00:00
2,jira:JiraIssue:1:10063,John Doe,status,In Progress,Done,2022-01-03 17:00:00

Upload qa_apis.csv file

POST /plugins/customize/csvfiles/qa_apis.csv

The HTTP Content-Type must be multipart/form-data, and the form should have three fields:

  • file: The CSV file
  • qaProjectId: It will be used as qa_project_id for the imported data
  • incremental: Boolean value indicating whether this is an incremental update (true/false)

Upload a CSV file and import it to the qa_apis table via this API. The following fields are required:

Field Name | Data Type | Description
id | varchar(500) | Unique identifier for the API
name | varchar(255) | API name
path | varchar(255) | API path
method | varchar(255) | HTTP method (GET, POST, etc.)
create_time | timestamp | Creation timestamp
creator_name | varchar(255) | Creator name (will create a record in accounts table and write creator_id to qa_apis table)

qa_apis.csv sample:

id,name,path,method,create_time,creator_name,qa_project_id
qa:api:1:101,Login API,/api/v1/login,POST,2025-07-01 10:00:00,tester1,project101
qa:api:1:102,User Info API,/api/v1/user/{id},GET,2025-07-01 11:30:00,tester2,project101
qa:api:1:103,Logout API,/api/v1/logout,POST,NULL,tester1,project102
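
For example, assuming the API server is at localhost:8080 and a local file named qa_apis.csv:

curl 'http://localhost:8080/plugins/customize/csvfiles/qa_apis.csv' \
--form 'file=@./qa_apis.csv' \
--form 'qaProjectId=project101' \
--form 'incremental=false'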

Upload qa_test_cases.csv file

POST /plugins/customize/csvfiles/qa_test_cases.csv

The HTTP Content-Type must be multipart/form-data, and the form should have four fields:

  • file: The CSV file
  • qaProjectId: (max length 500) Will be used as qa_project_id and will create/update a record in qa_projects table
  • qaProjectName: (max length 255) Will be written to the name field of qa_projects table. Together with qaProjectId, this will create a new record in the qa_projects table if one does not already exist.
  • incremental: Boolean value indicating whether this is an incremental update (true/false)

Upload a CSV file and import it to the qa_test_cases table via this API. The following fields are required:

Field Name | Data Type | Description
id | varchar(500) | Unique test case ID
name | varchar(255) | Test case name
create_time | timestamp | Creation timestamp
creator_name | varchar(255) | Creator name (will create a record in accounts table and write creator_id to qa_test_cases table)
type | varchar(255) | Test case type, api or functional
qa_api_id | varchar(255) | Related API ID, if type is api

qa_test_cases.csv sample:

id,name,create_time,creator_name,type,qa_api_id,qa_project_id
qa:case:1:201,Login Test,2025-07-02 09:00:00,tester1,api,qa:api:1:101,project101
qa:case:1:202,User Profile Test,2025-07-02 10:30:00,tester2,api,qa:api:1:102,project101
qa:case:1:203,UI Navigation Test,2025-07-02 11:45:00,tester3,functional,NULL,project102
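
For example, assuming the API server is at localhost:8080 and a local file named qa_test_cases.csv (the qaProjectName value below is just a placeholder):

curl 'http://localhost:8080/plugins/customize/csvfiles/qa_test_cases.csv' \
--form 'file=@./qa_test_cases.csv' \
--form 'qaProjectId=project101' \
--form 'qaProjectName=QA Project 101' \
--form 'incremental=false'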

Upload qa_test_case_executions.csv file

POST /plugins/customize/csvfiles/qa_test_case_executions.csv

The HTTP Content-Type must be multipart/form-data, and the form should have three fields:

  • file: The CSV file
  • qaProjectId: It will be used as qa_project_id for the imported data
  • incremental: Boolean value indicating whether this is an incremental update (true/false)

Upload a CSV file and import it to the qa_test_case_executions table via this API. The following fields are required:

Field Name | Data Type | Description
id | varchar(500) | Unique execution ID
qa_test_case_id | varchar(255) | Related test case ID
create_time | timestamp | Creation timestamp
start_time | timestamp | Test execution start time
finish_time | timestamp | Test execution finish time
creator_name | varchar(255) | Creator name (will create a record in accounts table and write creator_id to qa_test_case_executions table)
status | varchar(255) | Execution status (PENDING, IN_PROGRESS, SUCCESS, FAILED)

qa_test_case_executions.csv sample:

id,qa_test_case_id,create_time,start_time,finish_time,creator_name,status,qa_project_id
qa:exec:1:301,qa:case:1:201,2025-07-03 14:00:00,2025-07-03 14:05:00,2025-07-03 14:15:00,tester1,SUCCESS,project101
qa:exec:1:302,qa:case:1:202,2025-07-03 15:30:00,2025-07-03 15:35:00,NULL,tester2,IN_PROGRESS,project101
qa:exec:1:303,qa:case:1:203,2025-07-04 09:00:00,NULL,NULL,tester3,PENDING,project102