Automatic image tagging and visualization of tags from images with Amazon Rekognition

This project shows how to (1) automatically tag images with Amazon Rekognition and (2) visualize the analysis results of the tags extracted from the images.

- Read this in other languages: English, Korean(한국어)

The demo is built from the following AWS services:
- API Gateway
- Lambda Function
- Kinesis Data Stream
- Elasticsearch Service
- Rekognition
- S3
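
The tagging itself comes from Rekognition label detection on uploaded images. Here is a minimal sketch of that call with `boto3` (the bucket and object names are the ones from the request example below; `MaxLabels` and `MinConfidence` are illustrative, not necessarily what the sample's lambda uses):

```python
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

# Detect labels directly from the object already stored in S3.
response = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "image-valuts",
                        "Name": "raw-image/20191101_125236.jpg"}},
    MaxLabels=10,        # illustrative
    MinConfidence=70.0,  # illustrative
)

# Each detected label becomes a tag to index for visualization.
tags = [label["Name"] for label in response["Labels"]]
print(tags)
```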
Request

PUT /v1/{bucket}/{object}

| URL Path parameters | Description    | Required(Yes/No) | Data Type |
|---------------------|----------------|------------------|-----------|
| bucket              | s3 bucket name | Yes              | String    |
| object              | s3 object name | Yes              | String    |
ex)

- bucket: image-valuts
- object: raw-image/20191101_125236.jpg (Percent-encoding: raw-image%2F20191101_125236.jpg)

```
curl -X PUT "https://t2e7cpvqvu.execute-api.us-east-1.amazonaws.com/v1/image-valuts/raw-image%2F20191101_125236.jpg" \
  --data @20191101_125236.jpg
```
Response

- No Data
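
The same upload can also be scripted. Here is a minimal sketch with Python's `requests` library (the invoke URL, bucket, and file name are the example values from the request above; substitute your own):

```python
import urllib.parse

import requests

# Example values from the request above; replace with your own
# API Gateway invoke URL, bucket, and object key.
API_BASE = "https://t2e7cpvqvu.execute-api.us-east-1.amazonaws.com/v1"
bucket = "image-valuts"
object_key = "raw-image/20191101_125236.jpg"

# The object key must be percent-encoded so that '/' becomes '%2F'.
encoded_key = urllib.parse.quote(object_key, safe="")

with open("20191101_125236.jpg", "rb") as f:
    resp = requests.put(f"{API_BASE}/{bucket}/{encoded_key}", data=f.read())

resp.raise_for_status()  # the API returns no body on success
```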
- Install AWS CDK based on Getting Started With the AWS CDK, and create and register a new IAM User to deploy CDK Stacks into `~/.aws/config`. For example, create the IAM User `cdk_user` and register it into `~/.aws/config`:

  ```
  $ cat ~/.aws/config
  [profile cdk_user]
  aws_access_key_id=AKIAIOSFODNN7EXAMPLE
  aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
  region=us-east-1
  ```

  ℹ️ `cdk_user` needs the `AdministratorAccess` IAM policy.
- Upload the Python packages for the Lambda Layer into an S3 bucket.

  For example, to upload the `elasticsearch` python packages to an S3 bucket:

  (1) create the packages

  ```
  $ python3 -m venv es-lib
  $ cd es-lib
  $ source bin/activate
  (es-lib) $ mkdir -p python_modules
  (es-lib) $ pip install 'elasticsearch>=7.0.0,<7.11' requests requests-aws4auth -t python_modules
  (es-lib) $ mv python_modules python
  (es-lib) $ zip -r es-lib.zip python/
  (es-lib) $ aws s3 mb s3://my-bucket-for-lambda-layer-packages
  (es-lib) $ aws s3 cp es-lib.zip s3://my-bucket-for-lambda-layer-packages/var/
  (es-lib) $ deactivate
  ```

  (2) check that the zipped package file has been uploaded into S3

  ```
  $ aws s3 ls s3://image-insights-resources/var/
  2019-10-25 08:38:50          0
  2019-10-25 08:40:28    1294387 es-lib.zip
  ```
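
  For reference, a zipped package in S3 can be wired up as a Lambda layer in a CDK app roughly like this (a sketch assuming a CDK v1-style Python app; the construct IDs and names are illustrative, not the repository's actual code):

  ```python
  from aws_cdk import core
  from aws_cdk import aws_lambda, aws_s3 as s3

  class LayerStack(core.Stack):
      def __init__(self, scope, construct_id, **kwargs):
          super().__init__(scope, construct_id, **kwargs)

          # Look up the existing bucket that holds the zipped packages.
          lib_bucket = s3.Bucket.from_bucket_name(
              self, "LambdaLayerLib", "my-bucket-for-lambda-layer-packages")

          # Register the zip as a layer usable by Python 3.7 functions.
          es_lib_layer = aws_lambda.LayerVersion(
              self, "ESLib",
              layer_version_name="es-lib",
              code=aws_lambda.Code.from_bucket(lib_bucket, "var/es-lib.zip"),
              compatible_runtimes=[aws_lambda.Runtime.PYTHON_3_7],
          )
  ```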
  Reference

  - How to create a Lambda layer using a simulated Lambda environment with Docker

    ```
    $ cat <<EOF > requirements.txt
    > elasticsearch>=7.0.0,<7.11
    > requests==2.23.0
    > requests-aws4auth==0.9
    > EOF
    $ docker run -v "$PWD":/var/task "public.ecr.aws/sam/build-python3.7" /bin/sh -c "pip install -r requirements.txt -t python/lib/python3.7/site-packages/; exit"
    $ zip -r es-lib.zip python > /dev/null
    $ aws s3 mb s3://my-bucket-for-lambda-layer-packages
    $ aws s3 cp es-lib.zip s3://my-bucket-for-lambda-layer-packages/var/
    ```
- Download the source code from the git repository, and set up the cdk environment.

  ```
  $ git clone https://github.com/aws-samples/aws-realtime-image-analysis.git
  $ cd aws-realtime-image-analysis
  $ python3 -m venv .env
  $ source .env/bin/activate
  (.env) $ pip install -r requirements.txt
  ```
- Create an IAM User that is allowed to read/write S3, and take a note of both its `Access Key Id` and `Secret Key`.

  ```json
  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Sid": "VisualEditor0",
        "Effect": "Allow",
        "Action": [
          "s3:GetObject*",
          "s3:ListObject*",
          "s3:PutObject",
          "s3:PutObjectAcl"
        ],
        "Resource": "*"
      }
    ]
  }
  ```
- Set up the `cdk.context.json` file like this:

  ```json
  {
    "image_bucket_name_suffix": "Your-S3-Bucket-Name-Suffix",
    "lib_bucket_name": "Your-S3-Bucket-Name-Of-Lib",
    "s3_access_key_id": "Access-Key-Of-Your-IAM-User-Allowed-To-Access-S3",
    "s3_secret_key": "Secret-Key-Of-Your-IAM-User-Allowed-To-Access-S3"
  }
  ```

  For example,

  ```json
  {
    "image_bucket_name_suffix": "k7mdeng",
    "lib_bucket_name": "lambda-layer-resources-use1",
    "s3_access_key_id": "AKIAIOSFODNN7EXAMPLE",
    "s3_secret_key": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
  }
  ```
- Deploy by running the `cdk deploy` command.

  ```
  (.env) $ cdk --profile=cdk_user deploy --all
  ```
- Make sure the Binary Media Types of the image uploader API are configured correctly; otherwise API Gateway treats request bodies as text and uploaded images will be corrupted.
- (Optional) Create an ssh key for the bastion host.

  ```
  $ cd ~/.ssh
  $ ssh-keygen
  Generating public/private rsa key pair.
  Enter file in which to save the key (~/.ssh/id_rsa): MY-KEY
  Enter passphrase (empty for no passphrase):
  Enter same passphrase again:
  Your identification has been saved in MY-KEY.
  Your public key has been saved in MY-KEY.pub.
  The key fingerprint is:
  SHA256:NjRiNGM1MzY2NmM5NjY1Yjc5ZDBhNTdmMGU0NzZhZGF
  The key's randomart image is:
  +---[RSA 3072]----+
  |E*B++o           |
  |B= = ..          |
  |**o +            |
  |B=+ o            |
  |B+=oo .  S       |
  |+@o+ o .         |
  |*.+o .           |
  |.oo              |
  | o.              |
  +----[SHA256]-----+
  $ ls -1 MY-KEY*
  MY-KEY
  MY-KEY.pub
  ```
- (Optional) To access the OpenSearch Cluster, add the ssh tunnel configuration to the ssh config file (`~/.ssh/config` on Mac, Linux) of your local PC as follows:

  ```
  # Elasticsearch Tunnel
  Host estunnel
    HostName 12.34.56.78 # your server's public IP address
    User ec2-user # your server's user name
    IdentitiesOnly yes
    IdentityFile ~/.ssh/MY-KEY # your server's key pair
    LocalForward 9200 vpc-YOUR-ES-CLUSTER.us-east-1.es.amazonaws.com:443 # your Elasticsearch cluster endpoint
  ```
- (Optional) Send the ssh public key to the bastion host.

  ```
  $ cat send_ssh_public_key.sh
  #!/bin/bash -

  REGION=us-east-1 # Your Region
  INSTANCE_ID=i-xxxxxxxxxxxxxxxxx # Your Bastion Host Instance Id
  AVAIL_ZONE=us-east-1a # Your AZ
  SSH_PUBLIC_KEY=${HOME}/.ssh/MY-KEY.pub # Your SSH Public Key location

  aws ec2-instance-connect send-ssh-public-key \
    --region ${REGION} \
    --instance-os-user ec2-user \
    --instance-id ${INSTANCE_ID} \
    --availability-zone ${AVAIL_ZONE} \
    --ssh-public-key file://${SSH_PUBLIC_KEY}

  $ bash send_ssh_public_key.sh
  {
      "RequestId": "af8c63b9-90b3-48a9-9cb5-b242ec2c34ad",
      "Success": true
  }
  ```
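
  Note that a public key pushed with `send-ssh-public-key` remains usable on the instance for only 60 seconds, so start the ssh session promptly after the script succeeds.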
- (Optional) Run `ssh -N estunnel` in the Terminal.

  ```
  $ ssh -N estunnel
  ```
- (Optional) Connect to `https://localhost:9200/_dashboards/app/login?` in a web browser.
  - Search: `https://localhost:9200/`
  - Kibana: `https://localhost:9200/_dashboards/app/login?`
The lambda function uses the delivery role to sign HTTP (Signature Version 4) requests before sending the data to the Amazon OpenSearch Service endpoint.
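
For reference, that signing pattern with the `elasticsearch` and `requests-aws4auth` packages from the Lambda layer looks roughly like the sketch below (the region, endpoint, index name, and document are placeholders, not the repository's exact code):

```python
import boto3
from elasticsearch import Elasticsearch, RequestsHttpConnection
from requests_aws4auth import AWS4Auth

region = "us-east-1"  # placeholder
credentials = boto3.Session().get_credentials()  # the lambda's delivery role

# Sign every request with Signature Version 4 for the 'es' service.
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key,
                   region, "es", session_token=credentials.token)

es = Elasticsearch(
    hosts=[{"host": "vpc-YOUR-ES-CLUSTER.us-east-1.es.amazonaws.com", "port": 443}],
    http_auth=awsauth,
    use_ssl=True,
    verify_certs=True,
    connection_class=RequestsHttpConnection,
)

# Index a document, e.g., the tags extracted from an image.
es.index(index="image_insights",
         body={"image": "raw-image/20191101_125236.jpg", "tags": ["Person", "Dog"]})
```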
You manage Amazon OpenSearch Service fine-grained access control permissions using roles, users, and mappings. This section describes how to create roles and set permissions for the lambda function.
Complete the following steps:
- Connect to `https://localhost:9200/_dashboards/app/login?` in a web browser.
- Enter the master user and password that you set up when you created the Amazon OpenSearch Service endpoint. The user and password are stored in AWS Secrets Manager under a name such as `OpenSearchMasterUserSecret1-xxxxxxxxxxxx`.
- On the Welcome screen, click the toolbar icon to the left of the Home button. Choose Security.
- Under Security, choose Roles.
- Choose Create role.
- Name your role; for example, `firehose_role`.
- For cluster permissions, add `cluster_composite_ops` and `cluster_monitor`.
- Under Index permissions, choose Index Patterns and enter your index name pattern; for example, `image_insights*`.
- Under Permissions, add three action groups: `crud`, `create_index`, and `manage`.
- Choose Create.
In the next step, you map the IAM role that the lambda function uses to the role you just created.
- Choose Manage mapping, and under Backend roles, enter the IAM ARN of the role the lambda function uses; for example, `arn:aws:iam::123456789012:role/UpsertToESServiceRole709-xxxxxxxxxxxx`.
- Choose Map.
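
The role and mapping can also be created programmatically through the security plugin's REST API instead of the Dashboards UI. A sketch over the ssh tunnel, assuming OpenSearch's `_plugins/_security` endpoints and the master user credentials (`verify=False` because the cluster certificate will not match `localhost`):

```python
import requests

MASTER_AUTH = ("master-user", "master-password")  # from Secrets Manager
BASE = "https://localhost:9200/_plugins/_security/api"

# Create the role with the cluster and index permissions described above.
requests.put(
    f"{BASE}/roles/firehose_role",
    auth=MASTER_AUTH,
    verify=False,  # the tunnel endpoint is localhost, not the cluster's DNS name
    json={
        "cluster_permissions": ["cluster_composite_ops", "cluster_monitor"],
        "index_permissions": [{
            "index_patterns": ["image_insights*"],
            "allowed_actions": ["crud", "create_index", "manage"],
        }],
    },
).raise_for_status()

# Map the lambda function's IAM role to the new role as a backend role.
requests.put(
    f"{BASE}/rolesmapping/firehose_role",
    auth=MASTER_AUTH,
    verify=False,
    json={"backend_roles": [
        "arn:aws:iam::123456789012:role/UpsertToESServiceRole709-xxxxxxxxxxxx"
    ]},
).raise_for_status()
```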
Note: Once the OpenSearch role mapping for the lambda function is in place, you should no longer see a data delivery failure from the lambda function like this:

```
[ERROR] AuthorizationException: AuthorizationException(403, 'security_exception', 'no permissions for [cluster:monitor/main] and User [name=arn:aws:iam::123456789012:role/UpsertToESServiceRole709-G1RQVRG80CQY, backend_roles=[arn:aws:iam::123456789012:role/UpsertToESServiceRole709-G1RQVRG80CQY], requestedTenant=null]')
```
- Connect to `https://localhost:9200/_dashboards/app/login?` in a web browser (Chrome, Firefox, etc.).
- Enter the master user and password that you set up when you created the Amazon OpenSearch Service endpoint. The user and password are stored in AWS Secrets Manager under a name such as `OpenSearchMasterUserSecret1-xxxxxxxxxxxx`.
- In the Kibana toolbar, select the `Management > Index Patterns` menu, and create an `Index Pattern` (e.g., image_insights).
- Choose the `Visualize` menu and create graphs such as:
  - (a) Image Count
  - (b) Tag Cloud
  - (c) Tag Count
  - (d) Tag Pie Chart
- Make a dashboard out of the above graphs.
- Upload images through the APIs with Postman.
Clean Up

Delete the CloudFormation stacks by running the command below.

```
(.env) $ cdk --profile cdk_user destroy --all
```
See CONTRIBUTING for more information.
This library is licensed under the MIT-0 License. See the LICENSE file.