Deepomatic API v0.7 Reference
  • API Reference
  • Authentication
  • Quick Start
  • Neural Networks
  • Recognition Specification
  • Recognition Version
  • Tasks
  • Common objects
  • Errors
  • API Reference

    The Deepomatic API is organized around REST. Our API has predictable, resource-oriented URLs, and uses HTTP response codes to indicate API errors. We use built-in HTTP features, like HTTP authentication and HTTP verbs, which are understood by off-the-shelf HTTP clients. We support cross-origin resource sharing, allowing you to interact securely with our API from a client-side web application (though you should never expose your secret admin API key in any public website's client-side code). JSON is returned by all API responses, including errors, although our API libraries convert responses to appropriate language-specific objects.

    IMPORTANT: The API only accepts data sent with a Content-Type of "application/json" or "multipart/form-data". If you try to send "application/x-www-form-urlencoded" data, you will receive a 415 status code.
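
    For instance, here is a minimal sketch using the third-party requests library (not the official client): passing the body via json= sends it with Content-Type: application/json, which the API accepts, whereas data= would send application/x-www-form-urlencoded and trigger the 415. The endpoint and image URL are borrowed from the examples further below.

    import os, requests

    headers = {
        "X-APP-ID": os.environ["DEEPOMATIC_APP_ID"],
        "X-API-KEY": os.environ["DEEPOMATIC_API_KEY"],
    }
    body = {"inputs": [{"image": {"source": "https://static.deepomatic.com/resources/demos/api-clients/dog2.jpg"}}]}
    # json= serializes the body and sets the Content-Type: application/json header.
    response = requests.post(
        "https://api.deepomatic.com/v0.7/recognition/public/fashion-v4/inference",
        headers=headers, json=body)
    print(response.status_code, response.json())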

    API clients

    The Deepomatic API currently has one supported client, written in Python.

    API Endpoint

    https://api.deepomatic.com/v0.7

    Authentication

    To access the API, you will need an APP-ID and an API-KEY. Please log in at https://developers.deepomatic.com/dashboard#/account to retrieve your credentials. Make sure you use the appropriate credentials.

    Authentication is done via HTTP headers: the X-APP-ID header identifies which application you are accessing, and the X-API-KEY header authenticates your requests.

    To check that your credentials work, you can run:

    curl -s https://api.deepomatic.com/v0.7/accounts/me \
    -H "X-APP-ID: ${DEEPOMATIC_APP_ID}" -H "X-API-KEY: ${DEEPOMATIC_API_KEY}"
    import os, deepomatic
    client = deepomatic.Client(os.getenv('DEEPOMATIC_APP_ID'), os.getenv('DEEPOMATIC_API_KEY'))
    print(client.Account.retrieve('me'))

    Quick Start

    Before diving into the details of the API, let's first define some vocabulary that will be used consistently throughout this documentation.

    Testing pre-trained models

    When following the link to the API below, you will be asked to log in. Simply use the same email address and password you used for the developer dashboard.

    Listing public models

    The first thing you may want to do is try our pre-trained demo image recognition models. There are currently six of them.

    To get a list of public recognition models, run:

    curl https://api.deepomatic.com/v0.7/recognition/public \
    -H "X-APP-ID: ${DEEPOMATIC_APP_ID}" -H "X-API-KEY: ${DEEPOMATIC_API_KEY}"
    import os, deepomatic
    client = deepomatic.Client(os.getenv('DEEPOMATIC_APP_ID'), os.getenv('DEEPOMATIC_API_KEY'))
    for spec in client.RecognitionSpec.list(public=True):
        print(spec)

    Accessing the list of labels

    To access the list of labels for those classifiers, visit https://api.deepomatic.com/v0.7/recognition/public/{:model_name} by replacing {:model_name} with one of the IDs above.

    For example, you can try to visit: https://api.deepomatic.com/v0.7/recognition/public/street-v1.

    Please refer to the inference section for a complete description of the returned data.

    To access specifications of the model, including its output labels:

    MODEL_NAME="fashion-v4"
    curl https://api.deepomatic.com/v0.7/recognition/public/${MODEL_NAME} \
    -H "X-APP-ID: ${DEEPOMATIC_APP_ID}" -H "X-API-KEY: ${DEEPOMATIC_API_KEY}"
    MODEL_NAME="fashion-v4"
    spec = client.RecognitionSpec.retrieve(model_name)
    for label in spec['outputs'][0]['labels']['labels']:
    print("- {name} (id = {id})".format(name=label['name'], id=label['id']))

    Testing a model

    You can run a recognition query on a test image from a URL, a file path, binary data, or base64-encoded data. As our API is asynchronous, the inference endpoint returns a task ID. If you are trying the shell example, you may have to wait a second for the task to complete before running the second curl command.

    You can try your first recognition query from a URL by running:

    MODEL_NAME="fashion-v4"
    TASK=`curl https://api.deepomatic.com/v0.7/recognition/public/${MODEL_NAME}/inference \
    -H "X-APP-ID: ${DEEPOMATIC_APP_ID}" -H "X-API-KEY: ${DEEPOMATIC_API_KEY}" \
    -d '{"inputs": [{"image": {"source": "https://static.deepomatic.com/resources/demos/api-clients/dog2.jpg"}}], "show_discarded": false}' \
    -H "Content-Type: application/json"`
    # The curl result will return a task ID that we use to actually get the result
    echo ${TASK}
    TASK=$(echo ${TASK} | sed "s/[^0-9]*\([0-9]*\)[^0-9]*/\1/")
    sleep 1
    curl https://api.deepomatic.com/v0.7/tasks/${TASK} \
    -H "X-APP-ID: ${DEEPOMATIC_APP_ID}" -H "X-API-KEY: ${DEEPOMATIC_API_KEY}"
    import os, deepomatic
    client = deepomatic.Client(os.getenv('DEEPOMATIC_APP_ID'), os.getenv('DEEPOMATIC_API_KEY'))
    model = client.RecognitionSpec.retrieve("fashion-v4")
    url = "https://static.deepomatic.com/resources/demos/api-clients/dog2.jpg"
    model.inference(inputs=[deepomatic.ImageInput(url)], show_discarded=False)

    The result of this command is a JSON dictionary with one outputs field. This field has a single element here, as our public networks only have one interesting output tensor, of type labels. So the fun really begins by looking at the value of result['outputs'][0]['labels']['predicted'], which is a list of prediction objects (see the Specification Inference section for a full description of their fields).

    The output will be:

    {
      "outputs": [{
        "labels": {
          "predicted": [{
            "label_name": "sunglasses",
            "label_id": 9,
            "roi": {
              "region_id": 1,
              "bbox": {
                "xmin": 0.312604159,
                "ymin": 0.366485775,
                "ymax": 0.5318923,
                "xmax": 0.666821837
              }
            },
            "score": 0.990304172,
            "threshold": 0.347
          }],
          "discarded": []
        }
      }]
    }
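
    As a sketch of how you might consume this structure in Python (the dictionary below simply repeats the response above; the roi field is only present for detection models):

    result = {
        "outputs": [{
            "labels": {
                "predicted": [{
                    "label_name": "sunglasses", "label_id": 9,
                    "score": 0.990304172, "threshold": 0.347,
                    "roi": {"region_id": 1,
                            "bbox": {"xmin": 0.312604159, "ymin": 0.366485775,
                                     "xmax": 0.666821837, "ymax": 0.5318923}}
                }],
                "discarded": []
            }
        }]
    }

    for prediction in result["outputs"][0]["labels"]["predicted"]:
        roi = prediction.get("roi")  # absent when the spec's roi is "NONE"
        bbox = roi["bbox"] if roi else None
        print(prediction["label_name"], prediction["score"], bbox)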

    Preprocessing Examples

    Please refer to the documentation for an example of how to upload a network. This operation involves defining how input images should be preprocessed, via the preprocessing field. We give below some examples of this field:

    Typical preprocessing for a Caffe classification network:

    {
      "inputs": [
        {
          "tensor_name": "data",
          "image": {
            "resize_type": "SQUASH",
            "data_type": "FLOAT32",
            "dimension_order": "NCHW",
            "pixel_scaling": 255.0,
            "mean_file": "imagenet_mean.binaryproto",
            "target_size": "224x224",
            "color_channels": "BGR"
          }
        }
      ],
      "batched_output": true
    }

    Typical preprocessing for a Caffe Faster-RCNN network:

    {
      "inputs": [
        {
          "tensor_name": "data",
          "image": {
            "resize_type": "NETWORK",
            "data_type": "FLOAT32",
            "dimension_order": "NCHW",
            "pixel_scaling": 255.0,
            "mean_file": "imagenet_mean.binaryproto",
            "target_size": "800",
            "color_channels": "BGR"
          }
        },
        {
          "tensor_name": "im_info",
          "constant": {
            "shape": [3],
            "data": ["data.1", "data.2", 1.0]
          }
        }
      ],
      "batched_output": false
    }

    Typical preprocessing for a Tensorflow inception v3 network:

    {
      "inputs": [
        {
          "tensor_name": "map/TensorArrayStack/TensorArrayGatherV3:0",
          "image": {
            "resize_type": "CROP",
            "data_type": "FLOAT32",
            "dimension_order": "NHWC",
            "pixel_scaling": 2.0,
            "mean_file": "unitary_mean.npy",
            "target_size": "299x299",
            "color_channels": "BGR"
          }
        }
      ],
      "batched_output": true
    }

    The mean file "unitary_mean.npy" can be generated this way:

    import numpy as np
    mean = np.ones((1, 1, 3))  # HWC setup with H = 1, W = 1 and C = 3
    with open('unitary_mean.npy', 'wb') as f:
        np.save(f, mean, allow_pickle=False)

    Typical preprocessing for a Tensorflow detection network:

    {
      "inputs": [
        {
          "tensor_name": "image_tensor:0",
          "image": {
            "resize_type": "NETWORK",
            "data_type": "UINT8",
            "dimension_order": "NHWC",
            "pixel_scaling": 255.0,
            "mean_file": "",
            "target_size": "500",
            "color_channels": "RGB"
          }
        }
      ],
      "batched_output": true
    }

    Spec Output Examples

    Please refer to the documentation for an example of how to create a recognition specification. This operation involves defining outputs of your algorithms. We give below some examples of this field:

    def generate_outputs(labels, algo):
        return [{
            "labels": {
                "roi": "BBOX" if algo == "detection" else "NONE",
                "exclusive": algo != "tagging",
                "labels": [{"id": i, "name": l} for (i, l) in enumerate(labels)]
            }
        }]

    # This generates `outputs` for classification (exclusive labels, softmax output)
    outputs = generate_outputs(['hot-dog', 'not hot-dog'], 'classification')
    # This generates `outputs` for tagging (non-exclusive labels, sigmoid output)
    outputs = generate_outputs(['is_reptile', 'is_lezard'], 'tagging')
    # This generates `outputs` for detection (exclusive labels)
    outputs = generate_outputs(['car'], 'detection')

    Post-processing Examples

    Please refer to the documentation for an example of how to create a recognition version. This operation involves the post_processings field which defines how the output of the network should be handled.

    In the post-processings proposed below, we purposely omit the thresholds field: it will be set to its default value.

    Typical post-processing for classification:

    {
      "classification": {
        "output_tensor": "inception_v3/logits/predictions"
      }
    }

    Typical post-processing for an anchored detection algorithm

    This is typically the post-processing for a Caffe Faster-RCNN implementation:

    {
      "detection": {
        "anchored_output": {
          "anchors_tensor": "rois",
          "scores_tensor": "cls_prob",
          "offsets_tensor": "bbox_pred"
        },
        "discard_threshold": 0.025,
        "nms_threshold": 0.3,
        "normalize_wrt_tensor": "im_info"
      }
    }

    Typical post-processing for a direct output detection algorithm:

    {
      "detection": {
        "direct_output": {
          "boxes_tensor": "detection_boxes:0",
          "scores_tensor": "detection_scores:0",
          "classes_tensor": "detection_classes:0"
        },
        "discard_threshold": 0.025,
        "nms_threshold": 0.3,
        "normalize_wrt_tensor": ""
      }
    }

    Typical post-processing for a Yolo detection algorithm:

    {
      "detection": {
        "yolo_output": {
          "output_tensor": "import/output:0",
          "anchors": [1.3221, 1.73145, 3.19275, 4.00944, 5.05587, 8.09892, 9.47112, 4.84053, 11.2364, 10.0071]
        },
        "discard_threshold": 0.025,
        "nms_threshold": 0.3,
        "normalize_wrt_tensor": ""
      }
    }

    Neural Networks

    The Network Object

    A neural network object describes how input data should be preprocessed to be able to perform a simple inference and get raw output features from any layer.

    Attribute Type Attributes Description
    id int read-only The ID of the neural network.
    name string A short name for your network.
    description string A longer description of your network.
    update_date string read-only Date time (ISO 8601 format) of the last update of the network.
    metadata object A JSON field containing any kind of information that you may find interesting to store.
    framework string immutable A string describing which framework to use for your network. Possible values are:
    • nv-caffe-0.x-mod: A version of Caffe modified by NVIDIA and deepomatic to add support for Faster-RCNN. Currently version 0.16.4
    • tensorflow-1.x: Tensorflow, currently version 1.4
    preprocessing object immutable A preprocessing object to describe how input data should be pre-processed. Once the network is created, you cannot modify this field anymore.
    task_id int read-only ID of the task containing the deployment status of the network.

    Preprocessing Object

    This object describes how data should be preprocessed for each input of the network.

    Attribute Type Description
    inputs array(object) A list of Input Preprocessing Object. The order matters as input data will be fed in the same order at inference time.
    batched_output bool Set this to false if your network cannot handle batches, i.e. if the first dimension of your network's outputs is not related to the batch size. This is typically the case for vanilla Faster-RCNN models (see the Faster-RCNN preprocessing example above, which sets it to false).

    Input Preprocessing Object

    Attribute Type Description
    tensor_name string The name of the input tensor that this input will feed.
    image object An Image Preprocessing Object. Currently the only supported input type.

    Image Preprocessing Object

    Attribute Type Description
    dimension_order string A value describing the order of the dimensions in a batch
    N = batch_size, C = Channel, H = Height, W = Width
    Possible values are:
    • NCHW
    • NCWH
    • NHWC
    • NWHC
    resize_type string Possible values are:
    • SQUASH: the image is resized to fit the network input, losing the aspect ratio.
    • CROP: the image is resized so that the smallest side fits the network input; the rest is cropped.
    • FILL: the image is resized so that the largest side fits the network input; the rest is filled with white.
    • NETWORK: the image is resized so that its largest side fits target_size (see below) and the network is reshaped accordingly.
    target_size string Target size of the input image. It might have multiple formats. In the following W, H and N denote integer numbers, W (and H) being used specifically for width (and height), respectively:
    • WxH: image is resized so that width and height fit the specified sizes.
    • N: image is resized so that the largest side of the input image matches the specified number of pixels.
    color_channels string Might be RGB, BGR or L (for gray levels).
    pixel_scaling float Pixel values will be normalized between 0 and pixel_scaling before mean subtraction.
    mean_file string Name of the file containing the mean to subtract from the input, see create a new network. It may either be a Caffe mean file with a .binaryproto extension, or a numpy serialized array with a .npy extension.
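
    To make the resize_type and target_size semantics concrete, here is an illustrative sketch (our interpretation of the table above, not Deepomatic's actual implementation) of the intermediate dimensions an image would be resized to, before any cropping or filling:

    # Illustrative only: computes the resize dimensions implied by
    # `resize_type` and `target_size` as described in the table above.
    def resized_dimensions(width, height, resize_type, target_size):
        if "x" in target_size:                       # "WxH" form
            target_w, target_h = map(int, target_size.split("x"))
        else:                                        # "N" form
            target_w = target_h = int(target_size)
        if resize_type == "SQUASH":
            return target_w, target_h                # aspect ratio is lost
        scale_small = max(target_w / width, target_h / height)  # smallest side fits
        scale_large = min(target_w / width, target_h / height)  # largest side fits
        if resize_type == "CROP":
            return round(width * scale_small), round(height * scale_small)
        if resize_type in ("FILL", "NETWORK"):
            return round(width * scale_large), round(height * scale_large)
        raise ValueError(resize_type)

    print(resized_dimensions(640, 480, "SQUASH", "224x224"))  # (224, 224)
    print(resized_dimensions(640, 480, "CROP", "224x224"))    # (299, 224), then cropped to 224x224
    print(resized_dimensions(640, 480, "NETWORK", "800"))     # (800, 600), the network is reshaped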

    Create a Network

    Creates a new custom network after you have trained a model of your own on your local computer.

    Definition

    POST https://api.deepomatic.com/v0.7/networks
    client.Network.create(...)

    Arguments

    Parameter Type Default Description
    name string A short name for your network.
    description string "" A longer description of your network.
    metadata object {} A JSON field containing any kind of information that you may find interesting to store.
    framework string A string describing which framework to use for your network. Possible values are:
    • nv-caffe-0.x-mod: A version of Caffe modified by NVIDIA and deepomatic to add support for Faster-RCNN. Currently version 0.16.4
    • tensorflow-1.x: Tensorflow, currently version 1.4
    preprocessing object A preprocessing object to describe how input data should be pre-processed. Once the network is created, you cannot modify this field anymore.
    <additional-files> file Extra files for the network graph and weights, as well as mean files needed by the preprocessing. See below.

    Files for Caffe (framework = nv-caffe-0.x-mod)

    You need to specify at least those two files:

    Files for Tensorflow (framework = tensorflow-1.x)

    You need to specify at least one of those files:

    If the saved model does not embed the variables' weights, you may need to specify those additional files:

    Example Request

    # We download the Caffe GoogleNet pre-trained network
    curl -o /tmp/deploy.prototxt https://raw.githubusercontent.com/BVLC/caffe/master/models/bvlc_googlenet/deploy.prototxt
    curl -o /tmp/snapshot.caffemodel http://dl.caffe.berkeleyvision.org/bvlc_googlenet.caffemodel
    curl -o /tmp/caffe_ilsvrc12.tar.gz http://dl.caffe.berkeleyvision.org/caffe_ilsvrc12.tar.gz
    tar -zxvf /tmp/caffe_ilsvrc12.tar.gz -C /tmp
    # Now proceed to upload
    curl https://api.deepomatic.com/v0.7/networks \
    -H "X-APP-ID: ${DEEPOMATIC_APP_ID}" -H "X-API-KEY: ${DEEPOMATIC_API_KEY}" \
    -F name='my new network' \
    -F description='trained with more images' \
    -F metadata='{"author": "me", "project": "Go to mars"}' \
    -F framework='nv-caffe-0.x-mod' \
    -F preprocessing='{"inputs": [{"tensor_name": "data","image": {"dimension_order":"NCHW", "target_size":"224x224", "resize_type":"SQUASH", "mean_file": "mean_file_1.binaryproto", "color_channels": "BGR", "pixel_scaling": 255.0, "data_type": "FLOAT32"}}], "batched_output": true}' \
    -F deploy.prototxt=@/tmp/deploy.prototxt \
    -F snapshot.caffemodel=@/tmp/snapshot.caffemodel \
    -F mean_file_1.binaryproto=@/tmp/imagenet_mean.binaryproto
    import sys, tarfile
    if sys.version_info >= (3, 0):
        from urllib.request import urlretrieve
    else:
        from urllib import urlretrieve

    # Initialize the client
    import os, deepomatic
    client = deepomatic.Client(os.getenv('DEEPOMATIC_APP_ID'), os.getenv('DEEPOMATIC_API_KEY'))

    # Helper function to download demo resources for the Caffe pre-trained networks
    def download(url, local_path):
        if not os.path.isfile(local_path):
            print("Downloading {} to {}".format(url, local_path))
            urlretrieve(url, local_path)
            if url.endswith('.tar.gz'):
                tar = tarfile.open(local_path, "r:gz")
                tar.extractall(path='/tmp/')
                tar.close()
        else:
            print("Skipping download of {} to {}: file already exists".format(url, local_path))
        return local_path

    # We download the Caffe GoogleNet pre-trained network
    deploy_prototxt = download('https://raw.githubusercontent.com/BVLC/caffe/master/models/bvlc_googlenet/deploy.prototxt', '/tmp/deploy.prototxt')
    snapshot_caffemodel = download('http://dl.caffe.berkeleyvision.org/bvlc_googlenet.caffemodel', '/tmp/snapshot.caffemodel')
    mean_file = download('http://dl.caffe.berkeleyvision.org/caffe_ilsvrc12.tar.gz', '/tmp/imagenet_mean.binaryproto')

    # Here, we define the network preprocessing.
    # Please refer to the documentation to see what each field is used for.
    preprocessing = {
        "inputs": [
            {
                "tensor_name": "data",
                "image": {
                    "color_channels": "BGR",
                    "target_size": "224x224",
                    "resize_type": "SQUASH",
                    "mean_file": "mean.binaryproto",
                    "dimension_order": "NCHW",
                    "pixel_scaling": 255.0,
                    "data_type": "FLOAT32"
                }
            }
        ],
        "batched_output": True
    }

    # We now register the three files needed by our network
    files = {
        'deploy.prototxt': deploy_prototxt,
        'snapshot.caffemodel': snapshot_caffemodel,
        'mean.binaryproto': mean_file
    }

    # We upload the network
    network = client.Network.create(
        name="My first network",
        framework='nv-caffe-0.x-mod',
        preprocessing=preprocessing,
        files=files
    )

    Additional files for preprocessing

    You may also include any additional files required by your various input types, for example a mean file named as you like, whose name is referenced by the mean_file field of a preprocessing object (as long as it has one of the supported extensions, see mean_file). Please refer to Saving mean files on the right panel to find out how to save your mean files before sending them to the API.

    Saving mean files

    In order to save numpy tensor means to files before sending them to the API, proceed as follows:

    import numpy as np

    # Example mean file when `dimension_order == "HWC"` with H = 1, W = 1 and C = 3.
    # Typically, your mean image has been computed on the training images and you
    # already have this tensor available.
    example_mean_file = np.ones((1, 1, 3))

    # Save this mean to 'mean.npy'
    with open('mean.npy', 'wb') as f:
        np.save(f, example_mean_file, allow_pickle=False)

    # You can now use `"mean_file": "mean.npy"` in the preprocessing JSON
    # {
    #   ...
    #   "mean_file": "mean.npy"
    #   ...
    # }

    Response

    A neural network object

    Example Response

    {
      "id": 42,
      "name": "My first network",
      "description": "A neural network trained on some data",
      "task_id": 123,
      "update_date": "2018-02-16T16:37:25.148189Z",
      "metadata": {
        "any": "value"
      },
      "preprocessing": {
        "inputs": [
          {
            "tensor_name": "data",
            "image": {
              "dimension_order": "NCHW",
              "target_size": "224x224",
              "resize_type": "SQUASH",
              "mean_file": "mean.proto.bin",
              "color_channels": "BGR",
              "pixel_scaling": 255.0,
              "data_type": "FLOAT32"
            }
          }
        ],
        "batched_output": true
      }
    }

    List Networks

    Get the list of existing neural networks.

    Definition

    # To list public networks, use:
    GET https://api.deepomatic.com/v0.7/networks/public
    # To list your own networks, use:
    GET https://api.deepomatic.com/v0.7/networks
    # To list public networks, use:
    client.Network.list(public=True)
    # To list your own networks, use:
    client.Network.list()

    Example Request

    # For public networks:
    curl https://api.deepomatic.com/v0.7/networks/public \
    -H "X-APP-ID: ${DEEPOMATIC_APP_ID}" -H "X-API-KEY: ${DEEPOMATIC_API_KEY}"
    # For private networks:
    curl https://api.deepomatic.com/v0.7/networks \
    -H "X-APP-ID: ${DEEPOMATIC_APP_ID}" -H "X-API-KEY: ${DEEPOMATIC_API_KEY}"
    import os, deepomatic
    client = deepomatic.Client(os.getenv('DEEPOMATIC_APP_ID'), os.getenv('DEEPOMATIC_API_KEY'))
    # For public networks:
    for network in client.Network.list(public=True):
        print(network)
    # For private networks:
    for network in client.Network.list():
        print(network)

    Response

    A paginated list of responses.

    Attribute Type Description
    count int The total number of results.
    next string The URL to the next page.
    previous string The URL to the previous page.
    results array(object) A list of your neural network objects.
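
    If you are calling the REST API directly, here is a minimal sketch of walking these pages with the third-party requests library (the Python client examples above iterate for you):

    import os, requests

    headers = {
        "X-APP-ID": os.environ["DEEPOMATIC_APP_ID"],
        "X-API-KEY": os.environ["DEEPOMATIC_API_KEY"],
    }
    url = "https://api.deepomatic.com/v0.7/networks"
    while url is not None:
        page = requests.get(url, headers=headers).json()
        for network in page["results"]:
            print(network["id"], network["name"])
        url = page["next"]  # null (None) on the last page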

    Example Response

    {
      "count": 1,
      "next": null,
      "previous": null,
      "results": [
        {
          "id": 1,
          "name": "Alexnet",
          "description": "Alexnet",
          "task_id": "123",
          "update_date": "2018-02-16T13:45:36.078955Z",
          "metadata": {},
          "preprocessing": {
            "inputs": [
              {
                "tensor_name": "data",
                "image": {
                  "dimension_order": "NCHW",
                  "target_size": "224x224",
                  "resize_type": "SQUASH",
                  "mean_file": "data_mean.proto.bin",
                  "color_channels": "BGR",
                  "pixel_scaling": 255.0,
                  "data_type": "FLOAT32"
                }
              }
            ],
            "batched_output": true
          }
        }
      ]
    }

    Retrieve a Network

    Retrieve a neural network by ID.

    Definition

    # To retrieve a public network, use:
    GET https://api.deepomatic.com/v0.7/networks/public/{NETWORK_ID}
    # To retrieve your own network, use:
    GET https://api.deepomatic.com/v0.7/networks/{NETWORK_ID}
    # {NETWORK_ID} may be a string for a public
    # network or an integer for your own network.
    client.Network.retrieve({NETWORK_ID})

    Arguments

    Parameter Type Default Description
    network_id int The ID of the neural network to get.

    Example Request

    # For a public network:
    curl https://api.deepomatic.com/v0.7/networks/public/imagenet-inception-v1 \
    -H "X-APP-ID: ${DEEPOMATIC_APP_ID}" -H "X-API-KEY: ${DEEPOMATIC_API_KEY}"
    # For a private network:
    curl https://api.deepomatic.com/v0.7/networks/42 \
    -H "X-APP-ID: ${DEEPOMATIC_APP_ID}" -H "X-API-KEY: ${DEEPOMATIC_API_KEY}"
    import os, deepomatic
    client = deepomatic.Client(os.getenv('DEEPOMATIC_APP_ID'), os.getenv('DEEPOMATIC_API_KEY'))
    # For a public network:
    client.Network.retrieve("imagenet-inception-v1")
    # For a private network:
    client.Network.retrieve(42)

    Response

    A neural network object.

    Example Response

    {
      "id": 1,
      "name": "Alexnet",
      "description": "Alexnet",
      "task_id": "123",
      "update_date": "2018-02-16T13:45:36.078955Z",
      "metadata": {},
      "preprocessing": {
        "inputs": [
          {
            "tensor_name": "data",
            "image": {
              "dimension_order": "NCHW",
              "target_size": "224x224",
              "resize_type": "SQUASH",
              "mean_file": "data_mean.proto.bin",
              "color_channels": "BGR",
              "pixel_scaling": 255.0,
              "data_type": "FLOAT32"
            }
          }
        ],
        "batched_output": true
      }
    }

    Edit a Network

    Updates the specified network by setting the values of the parameters passed. Any parameters not provided will be left unchanged.

    This request accepts only the name, description and metadata arguments. Other values are immutable.

    Definition

    PATCH https://api.deepomatic.com/v0.7/networks/{NETWORK_ID}
    network = client.Network.retrieve({NETWORK_ID})
    network.update(...)

    Arguments

    Parameter Type Attributes Description
    name string optional A short name for your network.
    description string optional A longer description of your network.
    metadata object optional A JSON field containing any kind of information that you may find interesting to store.

    Example Request

    curl https://api.deepomatic.com/v0.7/networks/42 \
    -H "X-APP-ID: ${DEEPOMATIC_APP_ID}" -H "X-API-KEY: ${DEEPOMATIC_API_KEY}" \
    -d '{"name": "new name", "description":"new description"}' \
    -X PATCH
    import os, deepomatic
    client = deepomatic.Client(os.getenv('DEEPOMATIC_APP_ID'), os.getenv('DEEPOMATIC_API_KEY'))
    network = client.Network.retrieve(42)
    network.update(
        name="new name",
        description="new description"
    )

    Response

    A neural network object.

    Delete a Network

    Permanently deletes a network. This cannot be undone. Attached resources, like recognition versions, will also be deleted.

    Definition

    DELETE https://api.deepomatic.com/v0.7/networks/{NETWORK_ID}
    network = client.Network.retrieve({NETWORK_ID})
    network.delete()

    Arguments

    Parameter Type Default Description
    id int The Neural Network ID to delete.

    Example Request

    curl https://api.deepomatic.com/v0.7/networks/42 \
    -H "X-APP-ID: ${DEEPOMATIC_APP_ID}" -H "X-API-KEY: ${DEEPOMATIC_API_KEY}" \
    -X DELETE
    import os, deepomatic
    client = deepomatic.Client(os.getenv('DEEPOMATIC_APP_ID'), os.getenv('DEEPOMATIC_API_KEY'))
    network = client.Network.retrieve(42)
    network.delete()

    Response

    Returns 204 (no content).

    Recognition Specification

    The Specification Object

    A recognition specification describes the output format of your recognition algorithm. You may attach multiple algorithms that perform the same task to the same specification in order to leverage automatic updates of your embedded recognition models.
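
    For instance, here is a sketch of that update workflow, using the endpoints described later in this documentation (all IDs are illustrative):

    import os, deepomatic

    client = deepomatic.Client(os.getenv('DEEPOMATIC_APP_ID'), os.getenv('DEEPOMATIC_API_KEY'))

    # Attach a freshly uploaded network (ID 124 here) to the existing specification 42...
    new_version = client.RecognitionVersion.create(
        spec_id=42,
        network_id=124,
        post_processings=[{"classification": {"output_tensor": "inception_v3/logits/predictions"}}]
    )

    # ...then switch the spec to it: applications calling the spec's inference
    # endpoint pick up the new model without changing their code.
    spec = client.RecognitionSpec.retrieve(42)
    spec.update(current_version_id=new_version['id'])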

    Attribute Type Attributes Description
    id int (string) read-only, string for public The ID of the recognition specification. This field is a string for public recognition models.
    name string A short name for your recognition specification.
    description string A longer description of your recognition specification.
    update_date string read-only Date time (ISO 8601 format) of the last update of the recognition specification.
    metadata object A JSON field containing any kind of information that you may find interesting to store.
    current_version_id int nullable, hidden for public The ID of the current recognition version object that this specification will execute if you ask it to perform an inference. This is convenient if you want to allow your app to point to a constant API endpoint while keeping the possibility to smoothly update the recognition model behind. This field is hidden for public recognition models
    outputs array(object) hidden The specification of the outputs you would like to recognize. It's an array of output objects. As this field tends to be large, it is hidden when you access the list of recognition models.

    Output Object

    Attribute Type Description
    labels object An output of type labels.

    Labels Output Object

    Attribute Type Description
    labels array(object) A list of labels objects that will be recognized by your model.
    exclusive bool A boolean describing if the declared labels are mutually exclusive or not.
    roi string ROI stands for "Region Of Interest". Possible values are:
    • NONE: if your model performs classification only, without object localization
    • BBOX: if your model can also output bounding boxes for the multiple objects detected in the image.

    Label Object

    Attribute Type Description
    id int The numeric ID of your label. It can be anything you want; this ID will be present in the inference response for you to use.
    name string The name of the label. It will also be present in the inference response.

    Create a Specification

    Creates a new recognition specification.

    Definition

    POST https://api.deepomatic.com/v0.7/recognition/specs
    client.RecognitionSpec.create(...)

    Arguments

    Parameter Type Default Description
    name string A short name for your recognition model.
    description string "" A longer description of your recognition model.
    metadata object {} A JSON field containing any kind of information that you may find interesting to store.
    outputs array(object) The specification of the outputs you would like to recognize, as an array of output objects.

    Example Request

    curl https://api.deepomatic.com/v0.7/recognition/specs \
    -H "X-APP-ID: ${DEEPOMATIC_APP_ID}" -H "X-API-KEY: ${DEEPOMATIC_API_KEY}" \
    -d '{
        "name": "My recognition model",
        "description": "To recognize various types of birds",
        "metadata": {"author": "me", "project": "Birds 101"},
        "outputs": [{"labels": {"roi": "NONE", "exclusive": true, "labels": [{"id": 0, "name": "hot-dog"}, {"id": 1, "name": "not hot-dog"}]}}]
    }' \
    -H "Content-Type: application/json"
    import os, deepomatic
    client = deepomatic.Client(os.getenv('DEEPOMATIC_APP_ID'), os.getenv('DEEPOMATIC_API_KEY'))
    client.RecognitionSpec.create(
        name="hot-dog VS not hot-dog classifier",
        description="My great hot-dog VS not hot-dog classifier !",
        metadata={
            "author": "me",
            "project": "my secret project"
        },
        outputs=[{
            "labels": {
                "roi": "NONE",
                "exclusive": True,
                "labels": [{
                    "id": 0,
                    "name": "hot-dog"
                }, {
                    "id": 1,
                    "name": "not hot-dog"
                }]
            }
        }]
    )

    Response

    A recognition specification object.

    Example Response

    {
      "id": 42,
      "name": "hot-dog VS not hot-dog classifier",
      "description": "My great hot-dog VS not hot-dog classifier !",
      "task_id": 123,
      "update_date": "2018-02-16T16:37:25.148189Z",
      "metadata": {
        "author": "me",
        "project": "my secret project"
      },
      "outputs": [{
        "labels": {
          "roi": "NONE",
          "exclusive": true,
          "labels": [{
            "id": 0,
            "name": "hot-dog"
          }, {
            "id": 1,
            "name": "not hot-dog"
          }]
        }
      }],
      "current_version_id": null
    }

    List Specifications

    Get the list of existing recognition specifications.

    Definition

    # To list public specifications, use:
    GET https://api.deepomatic.com/v0.7/recognition/public
    # To list your own specifications, use:
    GET https://api.deepomatic.com/v0.7/recognition/specs
    # To list public specifications, use:
    client.RecognitionSpec.list(public=True)
    # To list your own specifications, use:
    client.RecognitionSpec.list()

    Example Request

    # For public specifications:
    curl https://api.deepomatic.com/v0.7/recognition/public \
    -H "X-APP-ID: ${DEEPOMATIC_APP_ID}" -H "X-API-KEY: ${DEEPOMATIC_API_KEY}"
    # For private specifications:
    curl https://api.deepomatic.com/v0.7/recognition/specs \
    -H "X-APP-ID: ${DEEPOMATIC_APP_ID}" -H "X-API-KEY: ${DEEPOMATIC_API_KEY}"
    import os, deepomatic
    client = deepomatic.Client(os.getenv('DEEPOMATIC_APP_ID'), os.getenv('DEEPOMATIC_API_KEY'))
    # For public specifications:
    for spec in client.RecognitionSpec.list(public=True):
        print(spec)
    # For private specifications:
    for spec in client.RecognitionSpec.list():
        print(spec)

    Response

    A paginated list of responses.

    Attribute Type Description
    count int The total number of results.
    next string The URL to the next page.
    previous string The URL to the previous page.
    results array(object) A list of your recognition specification objects. Please note that the outputs field is not present and that current_version_id is unavailable for public recognition models.

    Example Response

    {
      "count": 2,
      "next": null,
      "previous": null,
      "results": [
        {
          "id": 42,
          "name": "My great hot-dog VS not hot-dog classifier !",
          "description": "Very complicated classifier",
          "update_date": "2018-03-09T18:30:43.404610Z",
          "current_version_id": 1,
          "metadata": {}
        },
        ...
      ]
    }

    Get a Specification

    Retrieve a recognition specification by ID.

    Definition

    # To retrieve a public specification, use:
    GET https://api.deepomatic.com/v0.7/recognition/public/{SPEC_ID}
    # To retrieve your own specification, use:
    GET https://api.deepomatic.com/v0.7/recognition/specs/{SPEC_ID}
    # {SPEC_ID} may be a string for a public specification
    # or an integer for your own specification.
    client.RecognitionSpec.retrieve({SPEC_ID})

    Arguments

    Parameter Type Default Description
    spec_id int The ID of the recognition specification to get.

    Example Request

    # For a public specification:
    curl https://api.deepomatic.com/v0.7/recognition/public/fashion-v4 \
    -H "X-APP-ID: ${DEEPOMATIC_APP_ID}" -H "X-API-KEY: ${DEEPOMATIC_API_KEY}"
    # For a private specification:
    curl https://api.deepomatic.com/v0.7/recognition/specs/42 \
    -H "X-APP-ID: ${DEEPOMATIC_APP_ID}" -H "X-API-KEY: ${DEEPOMATIC_API_KEY}"
    import os, deepomatic
    client = deepomatic.Client(os.getenv('DEEPOMATIC_APP_ID'), os.getenv('DEEPOMATIC_API_KEY'))
    # For a public specification:
    client.RecognitionSpec.retrieve("fashion-v4")
    # For a private specification:
    client.RecognitionSpec.retrieve(42)

    Response

    A recognition specification object.

    Example Response

    {
      "id": "fashion-v4",
      "name": "Fashion detector",
      "description": "",
      "update_date": "2018-03-08T19:24:26.528815Z",
      "metadata": {},
      "outputs": [
        {
          "labels": {
            "roi": "BBOX",
            "exclusive": true,
            "labels": [
              {"id": 0, "name": "sweater"},
              {"id": 1, "name": "hat"},
              {"id": 2, "name": "dress"},
              {"id": 3, "name": "bag"},
              {"id": 4, "name": "jacket-coat"},
              {"id": 5, "name": "shoe"},
              {"id": 6, "name": "pants"},
              {"id": 7, "name": "suit"},
              {"id": 8, "name": "skirt"},
              {"id": 9, "name": "sunglasses"},
              {"id": 10, "name": "romper"},
              {"id": 11, "name": "top-shirt"},
              {"id": 12, "name": "jumpsuit"},
              {"id": 13, "name": "shorts"},
              {"id": 14, "name": "swimwear"}
            ]
          }
        }
      ]
    }

    Edit a Specification

    Updates the specified specification by setting the values of the parameters passed. Any parameters not provided will be left unchanged.

    This request accepts only the name, description, metadata and current_version_id arguments. Other values are immutable.

    Definition

    PATCH https://api.deepomatic.com/v0.7/recognition/specs/{SPEC_ID}
    spec = client.RecognitionSpec.retrieve({SPEC_ID})
    spec.update(...)

    Arguments

    Parameter Type Attributes Description
    name string optional A short name for your network.
    description string optional A longer description of your network.
    metadata object optional A JSON field containing any kind of information that you may find interesting to store.
    current_version_id int optional The ID of the current recognition version object that this specification will execute if you ask it to perform an inference. This is convenient if you want to allow your app to point to a constant API endpoint while keeping the possibility to smoothly update the recognition model behind.

    Example Request

    curl https://api.deepomatic.com/v0.7/recognition/specs/42 \
    -H "X-APP-ID: ${DEEPOMATIC_APP_ID}" -H "X-API-KEY: ${DEEPOMATIC_API_KEY}" \
    -H "Content-Type: application/json" \
    -d '{"name": "new name", "current_version_id": 123}' \
    -X PATCH
    import os, deepomatic
    client = deepomatic.Client(os.getenv('DEEPOMATIC_APP_ID'), os.getenv('DEEPOMATIC_API_KEY'))
    spec = client.RecognitionSpec.retrieve(42)
    spec.update(
        name="new name",
        current_version_id=123
    )

    Response

    A recognition specification object.

    Delete a Specification

    Permanently deletes a recognition specification. This cannot be undone. Attached resources, like recognition versions, will also be deleted.

    Definition

    DELETE https://api.deepomatic.com/v0.7/recognition/specs/{SPEC_ID}
    spec = client.RecognitionSpec.retrieve({SPEC_ID})
    spec.delete()

    Arguments

    Parameter Type Default Description
    spec_id int The ID of the specification to delete.

    Example Request

    curl https://api.deepomatic.com/v0.7/recognition/specs/42 \
    -H "X-APP-ID: ${DEEPOMATIC_APP_ID}" -H "X-API-KEY: ${DEEPOMATIC_API_KEY}" \
    -X DELETE
    import os, deepomatic
    client = deepomatic.Client(os.getenv('DEEPOMATIC_APP_ID'), os.getenv('DEEPOMATIC_API_KEY'))
    spec = client.RecognitionSpec.retrieve(42)
    spec.delete()

    Response

    Returns 204 (no content).

    Specification Inference

    Run an inference on the current version of the specification (therefore, its current_version_id field must not be null). This endpoint returns a task ID.

    Definition

    # To run inference on a public specification, use:
    POST https://api.deepomatic.com/v0.7/recognition/public/{SPEC_ID}/inference
    # To run inference on your own specification, use:
    POST https://api.deepomatic.com/v0.7/recognition/specs/{SPEC_ID}/inference
    # {SPEC_ID} may be a string for a public specification
    # or an integer for your own specification.
    spec = client.RecognitionSpec.retrieve({SPEC_ID})
    spec.inference(...)

    Arguments

    Parameter Type Default Description
    spec_id int The ID of the recognition specification.
    inputs array(object) The inputs of the neural network as an array of input objects. Must be non empty.
    show_discarded bool false A boolean indicating if the response must include labels which did not pass the recognition threshold.
    max_predictions int 100 The maximum number of predicted and discarded objects to return.

    Example Request

    URL=https://static.deepomatic.com/resources/demos/api-clients/dog2.jpg
    # Inference from a URL:
    curl https://api.deepomatic.com/v0.7/recognition/public/fashion-v4/inference \
    -H "X-APP-ID: ${DEEPOMATIC_APP_ID}" -H "X-API-KEY: ${DEEPOMATIC_API_KEY}" \
    -H "Content-Type: application/json" \
    -d '{"inputs": [{"image": {"source": "'"${URL}"'"}}], "show_discarded": true, "max_predictions": 100}'
    # You can also directly send an image file or binary content using multipart/form-data:
    curl ${URL} > /tmp/img.jpg
    curl https://api.deepomatic.com/v0.7/recognition/public/fashion-v4/inference \
    -H "X-APP-ID: ${DEEPOMATIC_APP_ID}" -H "X-API-KEY: ${DEEPOMATIC_API_KEY}" \
    -F inputs[0]image.source=@/tmp/img.jpg
    # You can also send base64 data by prefixing it with 'data:image/*;base64,' and sending it as application/json:
    BASE64_DATA=$(base64 /tmp/img.jpg | tr -d '\n')
    curl https://api.deepomatic.com/v0.7/recognition/public/fashion-v4/inference \
    -H "X-APP-ID: ${DEEPOMATIC_APP_ID}" -H "X-API-KEY: ${DEEPOMATIC_API_KEY}" \
    -H "Content-Type: application/json" \
    -d '{"inputs": [{"image": {"source": "data:image/*;base64,'"${BASE64_DATA}"'"}}]}'
    import base64
    import sys
    if sys.version_info >= (3, 0):
        from urllib.request import urlretrieve
    else:
        from urllib import urlretrieve
    from deepomatic import ImageInput
    import os, deepomatic
    client = deepomatic.Client(os.getenv('DEEPOMATIC_APP_ID'), os.getenv('DEEPOMATIC_API_KEY'))
    spec = client.RecognitionSpec.retrieve("fashion-v4")

    # Inference from a URL:
    url = "https://static.deepomatic.com/resources/demos/api-clients/dog2.jpg"
    spec.inference(inputs=[ImageInput(url)], show_discarded=True, max_predictions=100)

    # You can also directly send an image file:
    urlretrieve(url, '/tmp/img.jpg')
    with open('/tmp/img.jpg', 'rb') as fp:
        spec.inference(inputs=[ImageInput(fp)])

    # You can also send binary data:
    with open('/tmp/img.jpg', 'rb') as fp:
        binary_data = fp.read()
    spec.inference(inputs=[ImageInput(binary_data, encoding="binary")])

    # If you finally want to send base64 data, you can use:
    base64_data = base64.b64encode(binary_data)
    spec.inference(inputs=[ImageInput(base64_data, encoding="base64")])

    Response

    Once the task completes, its data field will contain the inference results:

    Attribute Type Description
    outputs array(object) An array of inference output objects. The ith element of the array corresponds to the result of the ith element of the specification outputs field and version post_processings field.

    Inference Output Object

    This object is directly related to the output object of the specification: they both have the same unique field. It stores the recognition inference output.

    Attribute Type Description
    labels object An output of type labels.

    Inference Labels Output Object

    Attribute Type Description
    predicted array(object) An array of prediction objects. This field stores the list of recognition hypotheses whose score is above the recognition threshold.
    discarded array(object) An array of prediction objects. If you passed show_discarded=true in the inference request, this field will store the list of recognition hypotheses whose score did not reach the recognition threshold.

    Prediction Object

    Stores information related to an object hypothesis.

    Attribute Type Description
    label_id int The recognized label ID from the specification's label object.
    label_name string The recognized label name from the specification's label object.
    score float The recognition score.
    threshold float The recognition threshold that was defined by the recognition version post-processing.
    sequence_index int The position of the prediction if the input data was a sequence of inputs (e.g. when passing a video to a network that accepts images), null otherwise.
    sequence_time float If the input data is a time series, the position in seconds of the prediction corresponding to sequence_index, null otherwise.
    roi object (optional) If the roi field of the corresponding labels output object is not "NONE", this field will store a ROI object.

    ROI Object

    "ROI" stands for "Region Of Interest" and describes the position of an object.

    Attribute Type Description
    region_id int The region ID. It might not be unique among all the returned prediction objects. It can be used in conjunction with show_discarded=true to group the predicted and discarded fields by region_id, in order to identify alternate labels for a given region in case of a hesitation.
    bbox object (optional) Present if and only if the region type is "BBOX" (see labels output object).
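
    For example, here is a small sketch grouping hypotheses by region to inspect alternate labels for the same box, assuming labels_output is the inference labels output object obtained with show_discarded=true:

    from collections import defaultdict

    def group_by_region(labels_output):
        # Map region_id -> hypotheses sorted by decreasing score.
        regions = defaultdict(list)
        for prediction in labels_output["predicted"] + labels_output["discarded"]:
            roi = prediction.get("roi")
            if roi is not None:
                regions[roi["region_id"]].append(prediction)
        return {region_id: sorted(hypotheses, key=lambda p: -p["score"])
                for region_id, hypotheses in regions.items()}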

    Example Response

    {
      "task_id": "123"
    }

    Example of Task Data

    {
      "outputs": [{
        "labels": {
          "predicted": [{
            "label_id": 9,
            "label_name": "sunglasses",
            "score": 0.990304172,
            "threshold": 0.347,
            "roi": {
              "region_id": 1,
              "bbox": {
                "xmin": 0.312604159,
                "ymin": 0.366485775,
                "ymax": 0.5318923,
                "xmax": 0.666821837
              }
            }
          }],
          "discarded": []
        }
      }]
    }

    Recognition Version

    The Version Object

    A recognition version implements a specification: this is the link between a specification and a neural network.

    Attribute Type Attributes Description
    id int read-only The ID of the recognition version.
    spec_id int immutable The ID of the parent recognition specification.
    network_id int immutable The ID of the neural network which will carry out the computation.
    post_processings array(object) immutable The post-processing objects that define some network-specific adjustments like the output tensor, some thresholds, etc. The length of this array must exactly match the length of the outputs field of the parent specification, and the ith post-processing will be matched to the ith output.
    spec_name string read-only The name of the recognition specification corresponding to spec_id. This is convenient for display purposes.
    network_name string read-only The name of the neural network corresponding to network_id. This is convenient for display purposes.
    update_date string read-only Date time (ISO 8601 format) of the creation of the recognition version.

    Post-processing Object

    Attribute Type Description
    classification object A post-processing of type classification for an output of type labels.
    detection object A post-processing of type detection for an output of type labels.

    Classification Post-processing Object

    Attribute Type Description
    output_tensor string The name of the network tensor that holds the classification scores.
    thresholds array(float) A list of thresholds, one per label of the recognition specification. A label will be considered present if its score is greater than its threshold. The length of this array must exactly match the length of the labels field of the parent labels specification, and the ith threshold will be matched to the ith label.

    Detection Post-processing Object

    Attribute Type Description
    anchored_output object (optional) Some neural networks output some anchor bounding boxes together with the offsets to apply to those anchors to get the final bounding boxes. Use this if this is the case for your network.
    This object must have the following fields:
    • scores_tensor: the name of the output tensor containing the scores for each label and box. It must be of size N x (L + 1) where N is the number of region proposals and L is the number of labels in the recognition specification. The background score must be the additional first column of the tensor.
    • anchors_tensor: the name of the output tensor containing the anchors. It must be of size N x 5 where the 1st column is the region proposal score, the 2nd and 3rd columns are the x and y coordinates of the upper-left corner and the 4th and 5th columns are its width and height.
    • offsets_tensor: the name of the output tensor containing the offsets for each class (including background) and anchor. It must be of size N * (L + 1) * 4 where the first 4 columns correspond to the background label and the column order is the same as above: x, y, width, height.
    direct_output object (optional) Some neural networks directly output the final bounding boxes. Use this if this is the case of your network.
    This object must have the following fields:
    • scores_tensor: the name of the output tensor containing the scores for each label and box. It must be of size N x 1 where N is the number of region proposals in the recognition specification.
    • classes_tensor: the name of the output tensor containing the classes for each label and box. It must be of size N x 1 where N is the number of region proposals in the recognition specification. Be careful, it will contain the ids of the classes mapped in the specification, not the row number of the corresponding label.
    • boxes_tensor: the name of the output tensor containing the boxes. It must be of size N x 4 where the 4 columns correspond to ymin, xmin, ymax and xmax coordinates of the bounding boxes, xmin and ymin being the coordinates of the upper-left corner.
    yolo_output object (optional) Yolo typically outputs, in the same tensor, both (i) the class scores and (ii) the offsets relative to some predefined anchors. Use this if you are deploying a Yolo network.
    This object must have the following fields:
    • anchors: an array of X and Y dimensions for the anchors as found in the .meta file generated at training time. It must be of length B * 2 where B is the number of boxes per anchor. It starts with the X dimension (width) of the first box, then its Y dimension (height), then the width of the second box, etc...
    • output_tensor: the name of the output tensor containing both the scores for each label and the offset comparing to anchor boxes. It must be of size H x W x (B * (5 + L)) where H and W can be any number (those are the final tensor height and width), B is the same as for the anchors field and L is the number of labels.
    thresholds array(float) A list of thresholds, one per label of the recognition specification. A label will be considered present if its score is greater than its threshold. The length of this array must exactly match the length of the labels field of the parent labels specification, and the ith threshold will be matched to the ith label.
    nms_threshold float The Jaccard index threshold applied during NMS (non-maximum suppression) to decide if two boxes with the same label represent the same object.
    normalize_wrt_tensor string (optional) If your neural network outputs coordinates which are not normalized, use this field to specify a tensor holding the input image size (image height must be the first element of the tensor, image width the second element). Can be left blank if not used.

    Create a Version

    Creates a new recognition version.

    Definition

    POST https://api.deepomatic.com/v0.7/recognition/versions
    client.RecognitionVersion.create(...)

    Arguments

    Parameter Type Default Description
    spec_id int The ID of the parent recognition specification.
    network_id int The ID of the neural network which will carry out the computation.
    post_processings array(object) The post-processing objects that define some network-specific adjustments like the output tensor, some thresholds, etc. The length of this array must exactly match the length of the outputs field of the parent specification, and the ith post-processing will be matched to the ith output.

    Example Request

    curl https://api.deepomatic.com/v0.7/recognition/versions \
    -H "X-APP-ID: ${DEEPOMATIC_APP_ID}" -H "X-API-KEY: ${DEEPOMATIC_API_KEY}" \
    -d '{
    "spec_id": 42,
    "network_id": 123,
    "post_processings": [{"classification": {"output_tensor": "inception_v3/logits/predictions", "thresholds": [0.025, 0.025]}}]
    }' \
    -H "Content-Type: application/json"
    import os, deepomatic
    client = deepomatic.Client(os.getenv('DEEPOMATIC_APP_ID'), os.getenv('DEEPOMATIC_API_KEY'))
    client.RecognitionVersion.create(
        spec_id=42,
        network_id=123,
        post_processings=[{
            "classification": {
                "output_tensor": "inception_v3/logits/predictions",
                "thresholds": [
                    0.5,
                    0.5
                ]
            }
        }]
    )

    Response

    A recognition version object.

    Example Response

    {
      "id": 1,
      "spec_id": 42,
      "spec_name": "hot-dog VS not hot-dog classifier",
      "network_id": 123,
      "network_name": "hot-dog VS not hot-dog classifier",
      "update_date": "2018-03-09T18:30:43.404610Z",
      "post_processings": [{
        "classification": {
          "output_tensor": "inception_v3/logits/predictions",
          "thresholds": [
            0.5,
            0.5
          ]
        }
      }]
    }

    List Versions

    Get the list of existing recognition versions.

    Definition

    # To access all your versions, use:
    GET https://api.deepomatic.com/v0.7/recognition/versions
    # To access versions attached to a given recognition spec, use:
    GET https://api.deepomatic.com/v0.7/recognition/specs/{SPEC_ID}/versions
    # To access all your versions, use:
    client.RecognitionVersion.list()
    # To access versions attached to a given recognition spec, use:
    client.RecognitionSpec.retrieve({SPEC_ID}).versions()

    Example Request

    # To access all your versions:
    curl https://api.deepomatic.com/v0.7/recognition/versions \
    -H "X-APP-ID: ${DEEPOMATIC_APP_ID}" -H "X-API-KEY: ${DEEPOMATIC_API_KEY}"
    # To access versions attached to a given recognition spec, use:
    curl https://api.deepomatic.com/v0.7/recognition/specs/42/versions \
    -H "X-APP-ID: ${DEEPOMATIC_APP_ID}" -H "X-API-KEY: ${DEEPOMATIC_API_KEY}"
    import os, deepomatic
    client = deepomatic.Client(os.getenv('DEEPOMATIC_APP_ID'), os.getenv('DEEPOMATIC_API_KEY'))
    # To access all your versions, use:
    for version in client.RecognitionVersion.list():
        print(version)
    # To access versions attached to a given recognition spec, use:
    for version in client.RecognitionSpec.retrieve(42).versions():
        print(version)

    Response

    A paginated list of responses.

    Attribute Type Description
    count int The total number of results.
    next string The URL to the next page.
    previous string The URL to the previous page.
    results array(object) A list of your recognition version objects. Please note that the post_processings field is not present.

    Example Response

    {
      "count": 2,
      "next": null,
      "previous": null,
      "results": [
        {
          "id": 1,
          "spec_id": 42,
          "spec_name": "hot-dog VS not hot-dog classifier",
          "network_id": 123,
          "network_name": "hot-dog VS not hot-dog classifier",
          "update_date": "2018-03-09T18:30:43.404610Z"
        },
        ...
      ]
    }

    Get a Version

    Retrieve a recognition version by ID.

    Definition

    GET https://api.deepomatic.com/v0.7/recognition/versions/{VERSION_ID}
    client.RecognitionVersion.retrieve({VERSION_ID})

    Arguments

    Parameter Type Default Description
    version_id int The ID of the version to retrieve.

    Example Request

    curl https://api.deepomatic.com/v0.7/recognition/versions/1 \
    -H "X-APP-ID: ${DEEPOMATIC_APP_ID}" -H "X-API-KEY: ${DEEPOMATIC_API_KEY}"
    import os, deepomatic
    client = deepomatic.Client(os.getenv('DEEPOMATIC_APP_ID'), os.getenv('DEEPOMATIC_API_KEY'))
    client.RecognitionVersion.retrieve(1)

    Response

    A recognition version object.

    Delete a Version

    Permanently deletes a recognition version. This cannot be undone.

    Definition

    DELETE https://api.deepomatic.com/v0.7/recognition/versions/{VERSION_ID}
    version = client.RecognitionVersion.retrieve({VERSION_ID})
    version.delete()

    Arguments

    Parameter Type Default Description
    version_id int The ID of the version to delete.

    Example Request

    curl https://api.deepomatic.com/v0.7/recognition/versions/42 \
    -H "X-APP-ID: ${DEEPOMATIC_APP_ID}" -H "X-API-KEY: ${DEEPOMATIC_API_KEY}" \
    -X DELETE
    import os, deepomatic
    client = deepomatic.Client(os.getenv('DEEPOMATIC_APP_ID'), os.getenv('DEEPOMATIC_API_KEY'))
    version = client.RecognitionVersion.retrieve(42)
    version.delete()

    Response

    Returns 204 (no content).

    Version Inference

    Run an inference on this specification version. This endpoint returns a task ID. Please refer to Specification Inference for a comprehensive list of the inference request arguments and the response format.

    Definition

    POST https://api.deepomatic.com/v0.7/recognition/versions/{VERSION_ID}/inference
    version = client.RecognitionVersion.retrieve({VERSION_ID})
    version.inference(...)

    Example Request

    curl https://api.deepomatic.com/v0.7/recognition/versions/1/inference \
    -H "X-APP-ID: ${DEEPOMATIC_APP_ID}" -H "X-API-KEY: ${DEEPOMATIC_API_KEY}" \
    -d '{"inputs": [{"image": {"source": "https://static.deepomatic.com/resources/demos/api-clients/dog2.jpg"}}]}'
    import os, deepomatic
    client = deepomatic.Client(os.getenv('DEEPOMATIC_APP_ID'), os.getenv('DEEPOMATIC_API_KEY'))
    version = client.RecognitionVersion.retrieve(1)
    url = "https://static.deepomatic.com/resources/demos/api-clients/dog2.jpg"
    version.inference(inputs=[deepomatic.ImageInput(url)])

    Example Response

    {
      "task_id": "123"
    }

    Tasks

    The task object

    Some endpoints do not return a direct response. Instead, they return a task_id whose task will contain the results once ready. In most cases you will need to poll the task until its status is no longer "pending"; it will then be either "success" or "error":

    Attribute Type Description
    id string The task ID
    status string The task status, either "pending", "success", or "error"
    error string Defined in case of error, it is the error message
    date_created string The creation date timestamp
    date_updated string The timestamp at which the status switched from "pending" to another status
    data object The response JSON of the endpoint that generated the task

    Example of task object

    {
      "id": "269999729",
      "status": "success",
      "error": null,
      "date_created": "2018-03-10T20:38:12.818792Z",
      "date_updated": "2018-03-10T20:38:13.032942Z",
      "data": {
        "outputs": [
          {
            "labels": {
              "discarded": [],
              "predicted": [
                {
                  "threshold": 0.025,
                  "label_id": 207,
                  "score": 0.952849746,
                  "label_name": "golden retriever"
                }
              ]
            }
          }
        ]
      },
      "subtasks": null
    }
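
    A minimal polling sketch built on the Get task status endpoint below (the retry interval and timeout are arbitrary choices, and we assume the client's task objects support dict-style access as in the examples above):

    import time

    def wait_for_task(client, task_id, timeout=60.0, interval=0.5):
        deadline = time.time() + timeout
        while time.time() < deadline:
            task = client.Task.retrieve(task_id)
            if task['status'] != 'pending':
                return task  # status is now "success" or "error"
            time.sleep(interval)
        raise RuntimeError("task {} still pending after {}s".format(task_id, timeout))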

    Get task status

    This endpoint retrieves the state of a given task.

    Definition

    GET https://api.deepomatic.com/v0.7/tasks/{TASK_ID}
    client.Task.retrieve({TASK_ID})

    Arguments

    Parameter Type Default Description
    task_id string The task ID

    Example Request

    curl https://api.deepomatic.com/v0.7/tasks/269999729 \
    -H "X-APP-ID: ${DEEPOMATIC_APP_ID}" -H "X-API-KEY: ${DEEPOMATIC_API_KEY}"
    import os, deepomatic
    client = deepomatic.Client(os.getenv('DEEPOMATIC_APP_ID'), os.getenv('DEEPOMATIC_API_KEY'))
    client.Task.retrieve(269999729)

    Response

    It returns a task object.

    Example Response

    {
      "id": "269999729",
      "status": "success",
      "error": null,
      "date_created": "2018-03-10T20:38:12.818792Z",
      "date_updated": "2018-03-10T20:38:13.032942Z",
      "data": {
        "outputs": [
          {
            "labels": {
              "discarded": [],
              "predicted": [
                {
                  "threshold": 0.025,
                  "label_id": 207,
                  "score": 0.952849746,
                  "label_name": "golden retriever"
                }
              ]
            }
          }
        ]
      },
      "subtasks": null
    }

    Get multiple task status

    Retrieve the status of several tasks given their IDs.

    Definition

    GET https://api.deepomatic.com/v0.7/tasks
    tasks = client.Task.list(task_ids=[...])

    Arguments

    Parameter Type Default Description
    task_ids array(string) The task IDs as a JSON array of strings

    Example Request

    curl "https://api.deepomatic.com/v0.7/tasks?task_ids=269999729&task_ids=269999730" \
    -H "X-APP-ID: ${DEEPOMATIC_APP_ID}" -H "X-API-KEY: ${DEEPOMATIC_API_KEY}"
    tasks = client.Task.list(task_ids=[269999729, 269999730])

    Response

    A paginated list of task objects.

    Attribute Type Description
    count int The total number of results.
    next string The URL to the next page.
    previous string The URL to the previous page.
    results array(object) A list of task objects.

    Example Response

    {
      "count": 2,
      "next": null,
      "previous": null,
      "results": [
        {
          "id": "269999729",
          "status": "success",
          "error": null,
          "date_created": "2018-03-10T20:38:12.818792Z",
          "date_updated": "2018-03-10T20:38:13.032942Z",
          "data": {
            "outputs": [
              {
                "labels": {
                  "predicted": [
                    {
                      "threshold": 0.025,
                      "label_id": 207,
                      "score": 0.952849746,
                      "label_name": "golden retriever"
                    }
                  ],
                  "discarded": []
                }
              }
            ]
          },
          "subtasks": null
        },
        {
          "id": "269999730",
          "status": "success",
          "error": null,
          "date_created": "2018-03-10T20:39:47.346716Z",
          "date_updated": "2018-03-10T20:39:47.553246Z",
          "data": {
            "outputs": [
              {
                "labels": {
                  "predicted": [
                    {
                      "threshold": 0.025,
                      "label_id": 207,
                      "score": 0.952849746,
                      "label_name": "golden retriever"
                    }
                  ],
                  "discarded": []
                }
              }
            ]
          },
          "subtasks": null
        }
      ]
    }

    Common objects

    Input Object

    The input object describes the data to send as input to the network. You must specify exactly one key among the possible input types. Currently image and video inputs are supported.

    Parameter Type Default Description
    image object An image input.
    video object A video input.

    Image Input Object

    Attribute Type Description
    source string May have various forms:
    • The URL of the image
    • The base64 encoded content of the image prefixed by data:image/{:format};base64,.
    • The binary content of the image prefixed by data:image/{:format};binary,
    In the two last cases, {:format} is the image format (jpeg, png, etc...). If you don't know about the format just use *.
    bbox object A bounding box object to crop the image.
    polygon array(object) An array of point objects of size at least 3 to crop the image.
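
    As an illustration, here are two image inputs cropping the same picture before recognition, one with a bounding box and one with a triangular polygon (coordinates are normalized, see the point and bounding box objects below):

    # Bounding box crop
    bbox_input = {"image": {
        "source": "https://static.deepomatic.com/resources/demos/api-clients/dog2.jpg",
        "bbox": {"xmin": 0.1, "ymin": 0.2, "xmax": 0.8, "ymax": 0.9}
    }}

    # Polygon crop (at least 3 points)
    polygon_input = {"image": {
        "source": "https://static.deepomatic.com/resources/demos/api-clients/dog2.jpg",
        "polygon": [{"x": 0.1, "y": 0.1}, {"x": 0.9, "y": 0.1}, {"x": 0.5, "y": 0.9}]
    }}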

    Video Input Object

    Attribute Type Description
    source string Takes the same forms as for the image input object above, with the video content in place of the image content.
    process_fps float The number of inferences per second requested. Defaults to 1. Use a float lower than one if you want to process less than one frame per second (e.g. passing 0.5 will result in one inference every 2 seconds).

    The point object

    A point is usually used to define a polygon, in order to delimit a non-rectangular sub-part of an image. Coordinates are normalized with respect to the image width and height. The origin is the top-left corner.

    Attribute Type Description
    x float x-coordinate of the point
    y float y-coordinate of the point

    Example of point object

    {
      "x": 0.1,
      "y": 0.1
    }

    The bounding box object

    A bounding box is a rectangle used to delimit a sub-part of an image. Coordinates are normalized with respect to the image width and height. The origin is the top-left corner.

    Attribute Type Description
    xmin float x-coordinate of the left side of the bounding box
    ymin float y-coordinate of the top side of the bounding box
    xmax float x-coordinate of the right side of the bounding box
    ymax float y-coordinate of the bottom side of the bounding box

    Example of bounding box object

    {
      "xmin": 0.1,
      "ymin": 0.2,
      "xmax": 0.8,
      "ymax": 0.9
    }

    Errors

    The Deepomatic API uses the following error codes:

    Error Code Meaning
    400 Bad Request -- Your request is badly formatted
    401 Unauthorized -- Your API key is wrong
    403 Forbidden -- You are not authorized to access this endpoint
    404 Not Found -- The specified resource could not be found
    405 Method Not Allowed -- You tried to access an endpoint with an invalid method
    410 Gone -- The resource requested has been removed from our servers
    429 Too Many Requests -- You're sending too many requests! Slow down!
    500 Internal Server Error -- We had a problem with our server. Try again later.
    503 Service Unavailable -- We're temporarily offline for maintenance. Please try again later.
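
    When you hit transient errors (429, 500, 503), a simple client-side mitigation is to retry with exponential backoff. Here is a minimal sketch with the third-party requests library; the backoff values are arbitrary choices:

    import os, time, requests

    def get_with_retries(url, headers, max_attempts=5):
        delay = 1.0
        for _ in range(max_attempts):
            response = requests.get(url, headers=headers)
            if response.status_code not in (429, 500, 503):
                return response
            time.sleep(delay)  # back off before retrying
            delay *= 2
        return response  # last response, still an error after max_attempts

    headers = {
        "X-APP-ID": os.environ["DEEPOMATIC_APP_ID"],
        "X-API-KEY": os.environ["DEEPOMATIC_API_KEY"],
    }
    print(get_with_retries("https://api.deepomatic.com/v0.7/accounts/me", headers).status_code)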