{"__v":1,"_id":"57dc23557b04b40e00a303f3","category":{"__v":0,"_id":"57dc1bbd3ed3450e00dc9ea7","project":"55de06fa57f7b20d0097636b","version":"55de06fa57f7b20d0097636e","sync":{"url":"","isSync":false},"reference":false,"createdAt":"2016-09-16T16:20:13.682Z","from_sync":false,"order":7,"slug":"partner-attributes-api","title":"Data Marketplace API"},"parentDoc":null,"project":"55de06fa57f7b20d0097636b","user":"55de06e19db51a0d0064947d","version":{"__v":13,"_id":"55de06fa57f7b20d0097636e","project":"55de06fa57f7b20d0097636b","createdAt":"2015-08-26T18:35:38.642Z","releaseDate":"2015-08-26T18:35:38.642Z","categories":["55de06fb57f7b20d0097636f","55f1962e3936d52d00fb3c8f","55f1970339e3e8190068b2b8","55f1970d229b772300779a1f","55f1971cfd98c42300acc605","55f1d5c7fd98c42300acc69f","563cbfe4260dde0d00c5e9d4","5644cf437f1fff210078e690","57dc1bbd3ed3450e00dc9ea7","58a600a2243dd30f00fd8773","58ed1bdc068f780f00f64602","58f13b3a4f0ee50f00e24e81","58f173f792f9020f009cad16"],"is_deprecated":false,"is_hidden":false,"is_beta":false,"is_stable":true,"codename":"","version_clean":"1.0.0","version":"1.0"},"updates":[],"next":{"pages":[],"description":""},"createdAt":"2016-09-16T16:52:37.770Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":2,"body":"The import endpoint allows you to instruct PushSpring to import a file containing Advertising Identifiers and associated Attributes from S3.  When you are provisioned by PushSpring, you will be given a set of Amazon Web Services S3 credentials and a bucket/path prefix.  You will then create the data files to be imported, upload each file to Amazon S3 and then post to the import API endpoint to tell PushSpring to queue the import job.  You can also request import logs via the API to check on status of each file you queue for import.\n\nPushSpring also supports ingestion from your own S3 bucket.  When you call the \"/partner/import\" endpoint, you can optionally supply a bucket and credentials to be used for the import.  If those are not supplied, PushSpring will attempt to ingest from the bucket and path we allocated to you during partner setup.\n\nImports are queued and will complete in the order submitted. 
\n[block:api-header]\n{\n  \"type\": \"basic\",\n  \"title\": \"Endpoint\"\n}\n[/block]\nhttps://api.pushspring.com/v1/marketplace/import\n[block:api-header]\n{\n  \"type\": \"get\",\n  \"title\": \"List import logs\"\n}\n[/block]\nLists the last 100 import jobs in reverse order of creation.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -X GET -H \\\"Authorization: Bearer <APIKEY>\\\" \\\"https://api.pushspring.com/v1/marketplace/import\\\"\",\n      \"language\": \"curl\"\n    }\n  ]\n}\n[/block]\n**Response** \n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"[  \\n   {  \\n      \\\"import_log_id\\\":3,\\n      \\\"file_path\\\":\\\"export1.csv\\\",\\n      \\\"path_type\\\": 0,\\n      \\\"key_type\\\": 0,\\n      \\\"status\\\":0,\\n      \\\"unique_devices\\\":\\\"1234\\\",\\n      \\\"total_rows\\\":\\\"1300\\\",\\n      \\\"errors\\\":null,\\n      \\\"created_at\\\":\\\"2016-09-16T15:49:01.437Z\\\",\\n      \\\"updated_at\\\":\\\"2016-09-16T15:49:01.437Z\\\"\\n   },\\n   {\\n      \\\"import_log_id\\\":4,\\n      \\\"file_path\\\":\\\"export2.csv\\\",\\n      \\\"path_type\\\": 0,\\n      \\\"key_type\\\": 1,\\n      \\\"status\\\":0,\\n      \\\"unique_devices\\\":\\\"4321\\\",\\n      \\\"total_rows\\\":\\\"5000\\\",\\n      \\\"errors\\\":null,\\n      \\\"created_at\\\":\\\"2016-09-1T5:49:01.437Z\\\",\\n      \\\"updated_at\\\":\\\"2016-09-1T5:49:01.437Z\\\"     \\n   }\\n]\",\n      \"language\": \"json\"\n    }\n  ]\n}\n[/block]\n\n[block:api-header]\n{\n  \"type\": \"get\",\n  \"title\": \"Get import log\"\n}\n[/block]\nGets a single import log entry\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -X GET -H \\\"Authorization: Bearer <APIKEY>\\\"  \\\"https://api.pushspring.com/v1/marketplace/import/3\\\"\",\n      \"language\": \"curl\"\n    }\n  ]\n}\n[/block]\n**Response** \n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"{  \\n  \\\"import_log_id\\\":3,\\n  \\\"file_path\\\":\\\"export1.csv\\\",\\n  \\\"path_type\\\": 0,\\n  \\\"key_type\\\": 1,\\n  \\\"status\\\":0,\\n  \\\"unique_devices\\\":\\\"1234\\\",\\n  \\\"total_rows\\\":\\\"1300\\\",\\n  \\\"errors\\\":null,\\n  \\\"created_at\\\":\\\"2016-09-16T15:49:01.437Z\\\",\\n  \\\"updated_at\\\":\\\"2016-09-16T15:49:01.437Z\\\"\\n}\",\n      \"language\": \"json\"\n    }\n  ]\n}\n[/block]\n\n[block:api-header]\n{\n  \"type\": \"post\",\n  \"title\": \"Import\"\n}\n[/block]\nQueues a file for import.\n\n**Parameters** \n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Field\",\n    \"h-1\": \"Required\",\n    \"h-2\": \"Type\",\n    \"h-3\": \"Description\",\n    \"0-0\": \"file_path\",\n    \"0-1\": \"Y\",\n    \"0-2\": \"string\",\n    \"0-3\": \"A relative file path.  Do not include s3://import-pushspring-com or your issued path prefix.  If you uploaded a file to s3://import-pushspring-com/<partnerid>/export.csv the relative path would be \\\"export.csv\\\".\\n\\nIf you specify a path_type of 1 below this should be a absolute path of the form s3://<bucket>/<path>.\\n\\nThis can also represent a path prefix.  If a path prefix is used all files under that prefix will be loaded.  If you need to load multiple files per day this method is recommended over individual import calls.\",\n    \"1-0\": \"key_type\",\n    \"1-1\": \"Y\",\n    \"1-2\": \"integer\",\n    \"1-3\": \"Indicates the types of keys in file that is imported.  
\\n\\n0 = PushSpring assigned attribute_id\\n1 = partner_foreign_key associated with attribute\\n\\nMake sure the partner_foreign_key is associated with an attribute before importing the file.\",\n    \"2-0\": \"path_type\",\n    \"2-1\": \"N\",\n    \"2-2\": \"integer\",\n    \"2-3\": \"Indicates the type of import.  \\n\\nIf not specified defaults to PushSpring S3 bucket. \\n\\n0 - PushSpring S3 bucket\\n1 - Partner-owned S3 bucket\\n\\nIf a partner-owned bucket is specified then credentials is a required field.\",\n    \"4-0\": \"credentials\",\n    \"4-1\": \"N\",\n    \"4-2\": \"object\",\n    \"4-3\": \"For a Partner-Owned S3 bucket credentials should contain:\\n\\n{\\n   \\\"accessKeyId\\\": \\\"XXXXX\\\",\\n   \\\"secretAccessKey\\\":\\\"YYYY\\\",\\n   \\\"region\\\":\\\"ZZZZZ\\\"\\n}\\n\\nRegion should be whatever AWS region your bucket is in, e.g. 'us-west-2'\",\n    \"3-0\": \"compressed\",\n    \"3-1\": \"N\",\n    \"3-2\": \"boolean\",\n    \"3-3\": \"If not specified and path_type includes a single file we will look at the file extension and if it is \\\".gz\\\" we will assume the file is compressed.\\n\\nIf you are using a path prefix for the file_path you must specify this as there is no file extension to look at to determine if the file is compressed.\\n\\nDefaults to false.\"\n  },\n  \"cols\": 4,\n  \"rows\": 5\n}\n[/block]\n**Sample** \n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"curl -X POST -H \\\"Authorization: Bearer <APIKEY>\\\" \\\"https://api.pushspring.com/partner/import\\\" -H \\\"Content-Type: application/json\\\"\\n-d '{\\n\\t\\\"file_path\\\":\\\"export.csv\\\",\\n  \\\"key_type\\\":0\\n}' \\\"https://api.pushspring.com/v1/marketplace/import\\\"\",\n      \"language\": \"curl\"\n    }\n  ]\n}\n[/block]\n**Response** \n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"{\\n  \\\"import_log_id\\\":4\\n}\",\n      \"language\": \"json\"\n    }\n  ]\n}\n[/block]","excerpt":"","slug":"import","type":"basic","title":"Import"}