
Sample Java application

This section of the Help contains a sample Java application that uses the Hitachi API for Amazon S3 and the Amazon S3 SDK to perform a series of operations in HCP.


The application makes these assumptions:

  • The HCP system has a tenant named europe.
  • The tenant has a user account with username lgreen and password p4ssw0rd. The sample application uses the credentials for this account to access HCP.
  • By default, versioning is disabled for new buckets.
  • The local file system has folders named input and output that are located in the current working folder for the application.
  • The input folder contains two files, Q4_2019.ppt and Q3_2019.ppt.
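The access key and secret key hard-coded in the application below are derived from this user account: HCP expects the Base64-encoded username as the S3 access key and the lowercase hex MD5 digest of the password as the secret key. A minimal sketch of that derivation for an arbitrary account (the class and method names here are illustrative and not part of the sample application):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;

public class HcpCredentials {

    // Base64-encode the HCP username to form the S3 access key.
    static String accessKey(String username) {
        return Base64.getEncoder()
                     .encodeToString(username.getBytes(StandardCharsets.UTF_8));
    }

    // MD5-hash the HCP password (lowercase hex) to form the S3 secret key.
    static String secretKey(String password) {
        try {
            byte[] digest = MessageDigest.getInstance("MD5")
                    .digest(password.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // MD5 is always available
        }
    }

    public static void main(String[] args) {
        System.out.println(accessKey("lgreen"));   // bGdyZWVu
        System.out.println(secretKey("p4ssw0rd")); // 2a9d119df47ff993b662a8ef36f9ea20
    }
}
```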

What the application does

The sample application shown in this section uses the Hitachi API for Amazon S3 to:

  1. Create a bucket named finance in the context of the tenant named europe (the service point)
  2. List the buckets for the europe tenant that are owned by the user lgreen
  3. Add an ACL to the finance bucket
  4. Store an object named quarterly_rpts/Q4_2019.ppt in the finance bucket, associating custom metadata with the object in the process
  5. Store an object named quarterly_rpts/Q3_2019.ppt in the finance bucket
  6. Retrieve the object named quarterly_rpts/Q4_2019.ppt and write its content to a new file on the local file system
  7. Add an ACL to the object named quarterly_rpts/Q4_2019.ppt
  8. Check whether the content of the object named quarterly_rpts/Q3_2019.ppt has changed and, if it has, retrieve the object and write its content to a new file on the local file system
  9. Delete the quarterly_rpts/Q4_2019.ppt and quarterly_rpts/Q3_2019.ppt objects from the finance bucket
  10. Delete the quarterly_rpts folder from the finance bucket (HCP created this folder automatically when the first object was stored)
  11. Delete the finance bucket
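Before running the application, the local folders and input files it assumes must exist. One quick way to create them (a sketch; the file names come from the assumptions above, and the file contents are placeholder bytes, since the S3 API does not care whether they are real PowerPoint files):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class PrepareLocalFiles {

    public static void main(String[] args) {
        try {
            // Create the input and output folders in the current working folder.
            Path input = Files.createDirectories(Paths.get("input"));
            Files.createDirectories(Paths.get("output"));

            // Create the two input files the application uploads.
            for (String name : new String[] {"Q4_2019.ppt", "Q3_2019.ppt"}) {
                Files.write(input.resolve(name),
                            ("placeholder content for " + name).getBytes());
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```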

Required libraries

To run the sample application presented in this section, you need to have installed the AWS SDK for Java, which provides the com.amazonaws classes that the application imports, along with its dependencies.
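If you build with Maven, a dependency along these lines pulls in the S3 client and its transitive dependencies (the artifact coordinates are for the AWS SDK for Java 1.x; the version shown is illustrative, so substitute a current 1.x release):

```xml
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk-s3</artifactId>
    <!-- Illustrative version; use a current 1.x release. -->
    <version>1.12.261</version>
</dependency>
```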

Java application

Here’s the sample Java application.

/*
 * This sample Java application shows how to use the Hitachi API for Amazon S3,
 * which is compatible with Amazon S3. The application uses the Amazon S3 SDK.
 */
package com.hds.hcp.examples;

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.ClientConfiguration;
import com.amazonaws.Protocol;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.AccessControlList;
import com.amazonaws.services.s3.model.Bucket;
import com.amazonaws.services.s3.model.CanonicalGrantee;
import com.amazonaws.services.s3.model.EmailAddressGrantee;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.ListObjectsRequest;
import com.amazonaws.services.s3.model.ObjectListing;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.Permission;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.model.PutObjectResult;
import com.amazonaws.services.s3.model.S3Object;
import com.amazonaws.services.s3.model.S3ObjectInputStream;
import com.amazonaws.services.s3.model.S3ObjectSummary;

public class HS3SampleApp {

    /**
     * @param args
     */
    public static void main(String[] args) {

        /*
         * Initialize access credentials for the S3 compatible API client.
         */
        // base64 of HCP user name: "lgreen"
        String accessKey = "bGdyZWVu";
        // md5 of HCP user password: "p4ssw0rd"
        String secretKey = "2a9d119df47ff993b662a8ef36f9ea20";

        /*
         * Set up the client configuration to allow for 200 max HTTP
         * connections, as this is an HCP best practice.
         */
        ClientConfiguration myClientConfig = new ClientConfiguration();
        myClientConfig.setProtocol(Protocol.HTTPS);
        myClientConfig.setMaxConnections(200);

        /*
         * By default, AWS SDK uses the HTTPS protocol and validates
         * certificates with a certificate authority. The default
         * certificates installed in HCP are self-signed. If these
         * self-signed certificates are used, certificate validation
         * will need to be disabled.
         */
        System.setProperty("com.amazonaws.sdk.disableCertChecking", "true");

        /*
         * Build the hs3Client to be used for communication with HCP.
         */
        AmazonS3 hs3Client = new AmazonS3Client(
                                     new BasicAWSCredentials(accessKey,
                                           secretKey), myClientConfig);

        // Set up the service point to be the tenant in HCP.
        // (Placeholder hostname; replace with your HCP tenant domain name.)
        hs3Client.setEndpoint("europe.hcp.example.com");

        /*
         * Now that the hs3Client is created for HCP usage, proceed with some
         * operations.
         */
        String bucketName = "finance";

        try {
            /*
             * Create a new bucket. With HCP, the bucket name does not need
             * to be globally unique. It needs to be unique only within the HCP
             * service point (that is, the HCP tenant).
             */
            System.out.println("Creating bucket " + bucketName + "\n");
            hs3Client.createBucket(bucketName);

            /*
             * List the buckets you own at the service point.
             */
            System.out.println("Listing buckets");
            for (Bucket bucket : hs3Client.listBuckets()) {
                System.out.println(" * " + bucket.getName());
            }
            System.out.println();

            /*
             * Add an ACL to the bucket to give read to a user with the
             * specified user ID.
             */
            AccessControlList bucketACL = hs3Client.getBucketAcl(bucketName);
            bucketACL.grantPermission(
                new CanonicalGrantee("7370bb2d-033c-4f05-863e-35a4eaf1d739"),
                Permission.Read);
            hs3Client.setBucketAcl(bucketName, bucketACL);

            /*
             * Upload a couple of objects to the bucket from files on the local
             * file system.
             */
            String objectNamePrefix = "quarterly_rpts/";

            // Setup metadata for first object
            String firstFileName = "input/Q4_2019.ppt";
            ObjectMetadata metadata = new ObjectMetadata();
            metadata.addUserMetadata("Author", "P.D. Gray");
            metadata.addUserMetadata("Audit_Date", "2020-02-23");
            // Content-Length must be set because the application will use an
            // InputStream during the PUT. Otherwise, the whole file would be
            // read into memory, which could cause the application to run out
            // of memory.
            metadata.setContentLength(new File(firstFileName).length());

            System.out.println("Uploading first object to HCP from a file\n");
            String firstObjectName = objectNamePrefix + "Q4_2019.ppt";
            hs3Client.putObject(new PutObjectRequest(bucketName,
                                                  firstObjectName,
                                                  new FileInputStream(
                                                      firstFileName),
                                                  metadata));

            // Write a second object without metadata. Also collect its ETag for
            // later usage.
            System.out.println("Uploading second object to HCP from a file\n");
            String secondObjectName = objectNamePrefix + "Q3_2019.ppt";
            PutObjectResult result = hs3Client.putObject(
                                            new PutObjectRequest(bucketName,
                                                  secondObjectName,
                                                  new File(
                                                      "input/Q3_2019.ppt")));
            String secondObjectEtag = result.getETag();

            /*
             * List objects in the bucket with prefix quarterly_rpts/Q.
             * The bucket listing is limited to 1,000 items per request.
             * Be sure to check whether the returned listing has been
             * truncated. If it has, retrieve additional results by using
             * the AmazonS3.listNextBatchOfObjects(...) operation.
             */
            System.out.println("Listing objects with prefix "
                               + objectNamePrefix + "Q");
            ObjectListing objectListing = hs3Client.listObjects(
                                             new ListObjectsRequest()
                                                 .withBucketName(bucketName)
                                                 .withPrefix(objectNamePrefix
                                                               + "Q"));
            for (S3ObjectSummary objectSummary
                  : objectListing.getObjectSummaries()) {
                System.out.println(" * " + objectSummary.getKey() + " " +
                                   "(size = " + objectSummary.getSize() + ")");
            }
            System.out.println();

            /*
             * Download an object. When you download an object, you get all
             * the object metadata and a stream from which to read the object
             * content.
             */
            System.out.println("Downloading the first object\n");

            S3Object firstObject = hs3Client.getObject(
                                              new GetObjectRequest(bucketName,
                                                  firstObjectName));

            // Write the content to a file named Q4_2019.ppt in the
            // output folder.
            S3ObjectInputStream responseStream
                                   = firstObject.getObjectContent();
            FileOutputStream dataFile
                = new FileOutputStream("output/Q4_2019.ppt");

            // Keep reading bytes until the end of stream is reached.
            byte buffer[] = new byte[2048];
            int readSize;
            while (-1 != (readSize = responseStream.read(buffer))) {
                dataFile.write(buffer, 0, readSize);
            }
            dataFile.close();
            responseStream.close();


            /*
             * Add an ACL to the first object to give full control to the user
             * with the username rsilver. HCP will look up the user ID based
             * on the username.
             */
            AccessControlList objectACL = hs3Client.getObjectAcl(bucketName,
                                                          firstObjectName);
            objectACL.grantPermission(new EmailAddressGrantee("rsilver"),
                                      Permission.FullControl);
            hs3Client.setObjectAcl(bucketName, firstObjectName, objectACL);

            /*
             * Perform a conditional download of the object. This will get the
             * object only if it doesn't match the ETag we received when
             * storing the object.
             */
            System.out.println("Checking the second object");
            GetObjectRequest conditionalRequest
                = new GetObjectRequest(bucketName, secondObjectName)
                      .withNonmatchingETagConstraint(secondObjectEtag);
            S3Object conditionalObject
                                    = hs3Client.getObject(conditionalRequest);
            if (null == conditionalObject) {
                System.out.println(" The object did not change; not "
                  + "downloaded.\n");
            } else {
                // The object has changed; download it to a new file.
                System.out.println(
                    " The object changed; downloading new revision\n");

                S3ObjectInputStream refreshResponseStream
                                        = conditionalObject.getObjectContent();
                FileOutputStream dataFile2
                                   = new FileOutputStream(
                                         "output/Q3_2019.ppt");

                // Keep reading bytes until the end of stream is reached.
                byte readBuffer[] = new byte[2048];
                int conditionalReadSize;
                while (-1 != (conditionalReadSize
                                    = refreshResponseStream.read(readBuffer))) {
                    dataFile2.write(readBuffer, 0, conditionalReadSize);
                }
                dataFile2.close();
                refreshResponseStream.close();
            }

            /*
             * Delete the objects.
             */
            System.out.println(
                "Deleting the objects created by this sample application\n");
            hs3Client.deleteObject(bucketName, firstObjectName);
            hs3Client.deleteObject(bucketName, secondObjectName);

            /*
             * Delete the folder.
             */
            System.out.println(
                "Deleting the folder created when the first object was stored\n");
            hs3Client.deleteObject(bucketName, objectNamePrefix);

            /*
             * Delete the bucket.
             */
            System.out.println("Deleting the finance bucket\n");
            hs3Client.deleteBucket(bucketName);

        } catch (AmazonServiceException ase) {
            System.out.println(
                "Caught an AmazonServiceException, which means the request "
                    + "made it to HCP but was rejected for some reason.");
            System.out.println("Error Message: " + ase.getMessage());
            System.out.println("HTTP Status Code: " + ase.getStatusCode());
            System.out.println("AWS Error Code: " + ase.getErrorCode());
            System.out.println("Error Type: " + ase.getErrorType());
            System.out.println("Request ID: " + ase.getRequestId());
        } catch (AmazonClientException ace) {
            System.out.println(
                "Caught an AmazonClientException, which means the client "
                    + "encountered a serious internal problem while trying "
                    + "to communicate with HCP through the S3 compatible API, "
                    + "such as not being able to access the network.");
            System.out.println("Error Message: " + ace.getMessage());
        } catch (IOException ioe) {
            System.out.println(
                "Caught an IOException while trying to create an object or read "
                    + "from an internal buffer.");
            System.out.println("Error Message: " + ioe.getMessage());
        }
    }
}
