
Move a geoprocessing service's directories to an Amazon S3 bucket

At ArcGIS Server 10.7, administrators can change the jobs directory, along with the input and output directories, from a disk location to an Amazon S3 bucket. If your geoprocessing services consistently have large outputs, you can use an Amazon S3 bucket to scale your storage resources. Your geoprocessing services may be slower with this configuration.

Note:

A result map image layer of a geoprocessing service is not available with this configuration.

You can set up this cloud storage configuration using either an access key or an IAM role, following the steps described below.

Prepare the Amazon Web Services (AWS) environment

For synchronous geoprocessing services, only the Amazon S3 service is required. For asynchronous services, you need to provide ArcGIS Server access to DynamoDB and Simple Queue Service. Your Amazon S3 bucket, DynamoDB table, and Simple Queue Service queue must be in the same region.

  1. Create a bucket in Amazon S3.

    See Create your first S3 bucket for more information.

  2. Optionally, create a DynamoDB table with serviceId as the partition key and jobId as the sort key.

    Both keys are in string format. If you don't provide a table, ArcGIS Server automatically creates one.

  3. For each asynchronous geoprocessing service, create a unique standard queue in the Simple Queue Service (see the provisioning sketch after these steps).
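
If you prefer to script the AWS setup above rather than use the AWS Management Console, the following is a minimal boto3 (Python) sketch. The region, bucket, table, and queue names are placeholders, and it assumes that AWS credentials with permission to create these resources are already configured.

import boto3

# Placeholder names and region; replace with your own values.
REGION = "us-west-2"
BUCKET = "my-gp-output-bucket"
TABLE = "my-gp-jobs-table"
QUEUE = "my-gp-service-queue"

session = boto3.Session(region_name=REGION)

# Step 1: Create the S3 bucket for geoprocessing outputs.
session.client("s3").create_bucket(
    Bucket=BUCKET,
    CreateBucketConfiguration={"LocationConstraint": REGION},  # omit for us-east-1
)

# Step 2 (optional): Create the DynamoDB table with serviceId as the partition
# key and jobId as the sort key, both strings.
session.client("dynamodb").create_table(
    TableName=TABLE,
    AttributeDefinitions=[
        {"AttributeName": "serviceId", "AttributeType": "S"},
        {"AttributeName": "jobId", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "serviceId", "KeyType": "HASH"},   # partition key
        {"AttributeName": "jobId", "KeyType": "RANGE"},      # sort key
    ],
    BillingMode="PAY_PER_REQUEST",
)

# Step 3: Create one standard SQS queue per asynchronous geoprocessing service.
session.client("sqs").create_queue(QueueName=QUEUE)  # standard is the default type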

Use an access key

To use an access key to connect to the Amazon S3 bucket, complete the following steps:

Note:

If the geoprocessing service produces a file or raster output, the access key will be part of the URLs for the resulting file or raster outputs, the JSON schema file, and the help URLs. You must determine whether this conforms to your organization's security policy.

ArcGIS Server also needs the secret key for the access key during the data store item registration process. However, ArcGIS Server will always encrypt your secret key.

  1. Sign in to ArcGIS Server Administrator Directory and browse to Register Item.
  2. Provide connection information for the Amazon S3 bucket and DynamoDB table, including the access key and secret key (a scripted example follows these steps).

    Sample JSON to register a data item in ArcGIS REST API

    {
        "path": "/cloudStores/<name of your cloud store>",
        "type": "cloudStore",
        "provider": "amazon",
        "info": {
            "isManaged": false,
            "connectionString": {
                "accessKeyId": "<your access key>",
                "secretAccessKey": "<your secret key>",
                "region": "<your region>",
                "defaultEndpointsProtocol": "https",
                "credentialType": "accesskey"
            },
            "objectStore": "<name of your S3 bucket>/<optional folder name>",
            "dynamoDbTable": "<dynamoDB table name>"
        }
    }

  3. Return to the home page of the Administrator Directory, and click Services.
  4. Locate the geoprocessing service that you want to use with the AWS environment, click the service name, and click edit.
  5. In the JSON representation of the service, add the following key-value pairs with a new unique serviceId value, the name of your cloud store, and the queue for that service:

    Asynchronous geoprocessing configuration

    {
      "serviceId": "<a unique service ID>",
      "jobQueueStore":"/cloudStores/<name of your cloud store>",
      "jobTableStore": "/cloudStores/<name of your cloud store>",
      "outputStore": "/cloudStores/<name of your cloud store>",
      "jobObjectStore": "/cloudStores/<name of your cloud store>",
      "jobsStoreQueue": "<name of the queue>"
    }

    Synchronous geoprocessing configuration

    {
      "serviceId": "<a unique service ID>",
      "outputStore": "/cloudStores/<name of your cloud store>",
      "jobObjectStore": "/cloudStores/<name of your cloud store>"
    }
    Tip:

    The <name of your cloud store> value is at the end of the data store item's URL endpoint in the Administrator Directory.

  6. Click Save Edits to confirm the configuration.

    The geoprocessing service automatically restarts, which takes a moment.

  7. Repeat steps 4 through 6 for any additional services.
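
If you prefer to script steps 1 and 2 rather than use the Administrator Directory pages, the following is a minimal Python sketch against the ArcGIS Server Administrator API. The server URL, credentials, and cloud store name are placeholders, and the item payload mirrors the sample JSON in step 2.

import json
import requests

ADMIN = "https://gisserver.domain.com:6443/arcgis/admin"  # placeholder server URL

# Get an Administrator API token (placeholder credentials).
token = requests.post(
    f"{ADMIN}/generateToken",
    data={"username": "admin", "password": "secret", "client": "requestip", "f": "json"},
).json()["token"]

# Cloud store data item, mirroring the sample JSON in step 2 (placeholder values).
item = {
    "path": "/cloudStores/myGPCloudStore",
    "type": "cloudStore",
    "provider": "amazon",
    "info": {
        "isManaged": False,
        "connectionString": {
            "accessKeyId": "<your access key>",
            "secretAccessKey": "<your secret key>",
            "region": "<your region>",
            "defaultEndpointsProtocol": "https",
            "credentialType": "accesskey",
        },
        "objectStore": "<name of your S3 bucket>/<optional folder name>",
        "dynamoDbTable": "<dynamoDB table name>",
    },
}

# Register the data item (the Register Item page in the Administrator Directory).
resp = requests.post(
    f"{ADMIN}/data/registerItem",
    data={"item": json.dumps(item), "f": "json", "token": token},
)
print(resp.json())  # inspect the response to confirm the item was registered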

JSON sample

Change the service properties JSON of your geoprocessing service by adding the key-value pairs required in step 5 above.

Edit the GPServer JSON. The following is a truncated JSON sample for an asynchronous geoprocessing service configuration.

{
  "serviceName": "myGPService1",
  ...
  "properties": {
    "resultMapServer": "false",
    "maximumRecords": "1000",
    "virtualOutputDir": "/rest/directories/arcgisoutput",

    "serviceId": "<this_is_a_unique_serviceid>",
    "jobQueueStore": "/cloudStores/<name of your cloud store>",
    "jobTableStore": "/cloudStores/<name of your cloud store>",
    "outputStore": "/cloudStores/<name of your cloud store>",
    "jobObjectStore": "/cloudStores/<name of your cloud store>",
    "jobsStoreQueue": "<this_is_a_unique_queue_name>",

    "portalURL": "https://domain/webadaptor/"
  },
  "portalProperties": "",
  "extensions": "",
  "frameworkProperties": {},
  "datasets": []
}
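
To apply the edits in steps 4 through 6 with a script instead of the Administrator Directory pages, a sketch along the following lines could be used. It fetches the service's JSON from the Administrator API, adds the key-value pairs from step 5 under the service properties (as in the truncated sample above), and posts the result to the service's edit operation. The server URL, token, service name, and store names are placeholders.

import json
import requests

ADMIN = "https://gisserver.domain.com:6443/arcgis/admin"  # placeholder server URL
SERVICE = "myGPService1.GPServer"                         # placeholder; prefix a folder name if needed
token = "<token from generateToken>"

# Fetch the current JSON representation of the service.
svc = requests.get(
    f"{ADMIN}/services/{SERVICE}",
    params={"f": "json", "token": token},
).json()

# Add the asynchronous configuration keys under the service properties,
# matching the truncated sample above (placeholder values).
svc["properties"].update({
    "serviceId": "<a unique service ID>",
    "jobQueueStore": "/cloudStores/myGPCloudStore",
    "jobTableStore": "/cloudStores/myGPCloudStore",
    "outputStore": "/cloudStores/myGPCloudStore",
    "jobObjectStore": "/cloudStores/myGPCloudStore",
    "jobsStoreQueue": "<name of the queue>",
})

# Post the edited JSON back; the service restarts automatically.
resp = requests.post(
    f"{ADMIN}/services/{SERVICE}/edit",
    data={"service": json.dumps(svc), "f": "json", "token": token},
)
print(resp.json())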

Set up with an IAM role

To use an AWS IAM role, you must have a deployment in the same region as your Amazon S3 bucket, DynamoDB table, and Simple Queue Service queue. See ArcGIS Enterprise on Amazon Web Services for more information.

Ensure that your IAM roles have Simple Queue Service policies in place, in addition to the IAM policies for ArcGIS Enterprise on Amazon Web Services.
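
As a quick check that the role attached to your ArcGIS Server instances can reach all three services, you could run a short boto3 (Python) sketch on one of those instances. It relies on the instance profile (IAM role) for credentials; the region, bucket, table, and queue names are placeholders.

import boto3

REGION = "us-west-2"             # placeholder region
BUCKET = "my-gp-output-bucket"   # placeholder S3 bucket name
TABLE = "my-gp-jobs-table"       # placeholder DynamoDB table name
QUEUE = "my-gp-service-queue"    # placeholder SQS queue name

# No keys are passed; boto3 picks up the instance profile (IAM role) credentials.
session = boto3.Session(region_name=REGION)

session.client("s3").head_bucket(Bucket=BUCKET)              # S3 access
session.client("dynamodb").describe_table(TableName=TABLE)   # DynamoDB access
print(session.client("sqs").get_queue_url(QueueName=QUEUE))  # SQS access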

The configuration steps are the same as above, other than the data store registration at step 2. Use the sample JSON below for step 2.

Sample JSON to register a data item in ArcGIS REST API using IAM role

{
    "path": "/cloudStores/<name of your cloud store>",
    "type": "cloudStore",
    "provider": "amazon",
    "info": {
        "isManaged": false,
        "connectionString": {
            "region": "<your region>",
            "defaultEndpointsProtocol": "https",
            "credentialType": "iamrole"
        },
        "objectStore": "<name of your S3 bucket>/<optional folder name>",
        "dynamoDbTable": "<dynamoDB table name>"
    }
}