From Kitchen to Cloud: How Sanika’s Restaurant Modernized with AWS Elastic Beanstalk

Executive Summary:

In today’s digital-first world, even the most beloved restaurants need a robust online presence. Sanika’s Restaurant, known for its warm ambiance and curated culinary experiences, recognized the need to scale its digital operations to meet growing customer expectations. With the help of Cogniv Technologies and AWS, they embarked on a journey to modernize their infrastructure, streamline deployments, and enhance reliability.

The Challenge: Building a Digital Foundation

Sanika’s team faced several hurdles at the start of their digital journey:

  • Uncertainty around hosting platforms
  • Scalability limitations and potential downtime
  • Lack of automated deployment and monitoring tools
  • Security vulnerabilities due to manual configurations

These challenges threatened to delay their online launch and compromise user experience during peak hours.

The Solution: AWS Elastic Beanstalk + CI/CD Automation

To overcome these obstacles, Cogniv Technologies implemented a fully automated, cloud-native solution using AWS Elastic Beanstalk and a CI/CD pipeline.

How the Partner Resolved the Customer Challenge

Designed a Fully Automated 3-Tier Architecture:

Utilized AWS CloudFormation to provision infrastructure as code. Included a secure VPC, application layer (Elastic Beanstalk), and database layer (Amazon RDS).
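The case study does not include the actual template, but the provisioning described above can be pictured as a CloudFormation skeleton. The resource names and property values below are illustrative placeholders, not the customer’s template; a real stack would also define subnets, security groups, and the Elastic Beanstalk environment.

```json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Illustrative 3-tier skeleton: networking, application, and database layers",
  "Resources": {
    "AppVpc": {
      "Type": "AWS::EC2::VPC",
      "Properties": { "CidrBlock": "10.0.0.0/16" }
    },
    "WebApplication": {
      "Type": "AWS::ElasticBeanstalk::Application",
      "Properties": { "Description": "Dockerized web application (placeholder)" }
    },
    "AppDatabase": {
      "Type": "AWS::RDS::DBInstance",
      "Properties": {
        "Engine": "mysql",
        "DBInstanceClass": "db.t3.micro",
        "AllocatedStorage": "20",
        "MasterUsername": "admin",
        "MasterUserPassword": "{{resolve:ssm-secure:DbMasterPassword:1}}",
        "PubliclyAccessible": false
      }
    }
  }
}
```

Keeping the database credentials in an SSM SecureString parameter, as sketched here, avoids hard-coding secrets in the template.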

Implemented Secure Networking:

Deployed resources within a custom VPC for isolated and secure network access. Placed the RDS database in private subnets to protect sensitive data.

Deployed Scalable Application Hosting:

Used AWS Elastic Beanstalk to host the Dockerized application. Enabled automatic scaling, load balancing, and health monitoring.
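For the single-container Docker platform, Elastic Beanstalk reads a Dockerrun.aws.json file from the application bundle. A minimal version (the image URI here is a placeholder, not the actual repository) looks like:

```json
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "<account-id>.dkr.ecr.<region>.amazonaws.com/<image-name>:latest",
    "Update": "true"
  },
  "Ports": [
    { "ContainerPort": 80 }
  ]
}
```

Beanstalk then pulls the image, maps the container port behind its load balancer, and takes over health monitoring.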

Integrated GitHub for Source Control:

Enabled seamless version control and collaboration. Ensured traceability and transparency in code changes.

Established CI/CD Pipeline:

Configured AWS CodePipeline and CodeBuild for automated builds and deployments. Enabled continuous integration and delivery with minimal manual intervention.

Streamlined Deployment Workflow:

Automated the entire release process from code commit to production deployment. Reduced downtime and manual errors, ensuring consistent and reliable updates.

Enhanced Operational Efficiency:

Eliminated manual overhead, allowing the team to focus on innovation. Improved deployment speed and system reliability.

DevOps in Action: Practices That Powered the Transformation

Version Control: GitHub integration enabled seamless collaboration and change tracking.

CI/CD Pipelines: Automated deployment workflows ensured rapid, reliable software delivery.

Infrastructure as Code (IaC): AWS CloudFormation enabled consistent, version-controlled environment provisioning.

Technical Results:

– 85% faster deployments, from days to minutes
– Zero downtime during deployments
– 90% reduction in human error through automation
– 100% infrastructure consistency with IaC
– Improved scalability during peak traffic
– Shift from maintenance to innovation for the dev team

Lessons Learned:

– Plan network architecture early to ensure security and scalability.

– Automate everything — from infrastructure to deployments.

– Use managed services to reduce complexity and operational overhead.

– Integrate monitoring and security from the start.

About the Partner: Cogniv Technologies

Cogniv Technologies is an AWS Advanced Tier Partner specializing in:

– Cloud-native architecture

– DevOps and CI/CD automation

– FinOps and cloud cost optimization

Their customer-first approach and deep AWS expertise made them the ideal partner for Sanika’s digital transformation.

Conclusion:

Sanika’s Restaurant successfully transitioned from a manually managed setup to a modern, scalable cloud platform. With AWS and Cogniv Technologies, they now deliver a seamless digital experience that matches the excellence of their in-person dining, setting the stage for future growth and innovation.



Executive Summary:

Amazon OpenSearch Service provides robust mechanisms for snapshot backups, which are essential for disaster recovery, migrations, and compliance. While automated snapshots are handled internally by AWS, there are use cases where manual snapshots become necessary to meet custom requirements, especially during cross-cloud migrations or external recovery strategies. 

The Challenge: 

One of our clients faced a limitation with OpenSearch’s default automated snapshot feature. 

Scenario:

The client was preparing to migrate from AWS to Azure, and while OpenSearch automated snapshots were available, they did not meet the granularity or portability required for such a transition. 

Problem: 

Automated snapshots are stored in an AWS-managed repository. Direct access or export of these snapshots outside AWS is not supported. The client needed full control over the snapshots, including the ability to store and manage them in a customer-owned S3 bucket.

The Solution:

To overcome this limitation, Cogniv implemented a manual snapshot strategy using OpenSearch’s snapshot APIs, with the snapshots stored in a custom-managed Amazon S3 bucket. This approach provided:

  • Full ownership and visibility of backup data 
  • Greater flexibility for cross-cloud migration 
  • The ability to script, schedule, and manage snapshots as needed 

Prerequisites:

  • An S3 bucket must be created in the same AWS region where the OpenSearch domain is provisioned.

  • An EC2 instance should be running in the same VPC as the OpenSearch domain.

  • The EC2 instance must have an IAM role attached with permissions to access S3 and OpenSearch.

  • Access to the Dev Tools section in the OpenSearch Dashboard is required to register and trigger snapshots.

Implementation of the manual snapshot:

Step 1: Create the required IAM role

Create an IAM role with the following two policies:

        Policy Names: PutToOpenSearchAndPassRolePolicy & S3FullAccessToSpecificBucketPolicy

Policy 1 – PutToOpenSearchAndPassRolePolicy                        

Description:
 This policy allows a principal to upload data to an Amazon OpenSearch Service domain using HTTP PUT and to pass a designated IAM role. This is typically used in scenarios where a service needs to write logs or data to OpenSearch and assumes a role for execution. 

 This policy is for interacting with an Amazon OpenSearch domain and passing an IAM role:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::<account-id>:role/<role-name>"
        },
        {
            "Effect": "Allow",
            "Action": "es:ESHttpPut",
            "Resource": [
                "arn:aws:es:<region>:<account-id>:domain/<domain-name>/*"
            ]
        }
    ]
}

Policy 2 – S3FullAccessToSpecificBucketPolicy

Description:
Grants permission to list, read, upload, and delete objects in a specific Amazon S3 bucket. This is commonly used for services or applications that manage file storage within a single bucket. 

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::<bucket-name>"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::<bucket-name>/*"
            ]
        }
    ]
}

            

Step 2: Update the trust policy of the created IAM role

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": [
                    "es.amazonaws.com",
                    "ec2.amazonaws.com"
                ]
            },
            "Action": "sts:AssumeRole"
        }
    ]
}

Step 3: Mapping the IAM Role ARN in the OpenSearch Dashboard

    To allow OpenSearch to use the IAM role for snapshot operations, map the created IAM Role ARN to the relevant security roles within the OpenSearch Dashboard. 

Navigation Path: Menu → Management → Security → Roles 

Instructions: 

Map to manage_snapshots Role:

  1. In the Roles view, search for the predefined role named manage_snapshots.
  2. Select the role and locate the section titled Mapped users.
  3. Click Edit and add the ARN of the IAM role created for snapshot access.

Map to all_access Role: 

  • Similarly, search for the role named all_access. 
  • Select the role and locate both the Mapped users and Backend roles sections. 
  • Click Edit for each section and add the same IAM Role ARN. 
  • Save the changes.

This mapping ensures that the IAM role has the necessary permissions to interact with OpenSearch and manage snapshot operations via the REST API or CLI.
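For teams that prefer scripting over the Dashboard, the same mapping can be done through the Security plugin’s REST API (PUT _plugins/_security/api/rolesmapping/&lt;role&gt;). The sketch below only builds the URL and payload; the endpoint and ARN are placeholders, and the signed request is left commented out since it reuses the awsauth object built in Step 5.

```python
# Sketch: map an IAM role ARN to an OpenSearch security role via the
# Security plugin REST API (an alternative to the Dashboard steps above).

def build_role_mapping(iam_role_arn):
    """Payload for PUT _plugins/_security/api/rolesmapping/<role>.
    IAM ARNs go in both 'users' (mapped users) and 'backend_roles'."""
    return {
        "backend_roles": [iam_role_arn],
        "users": [iam_role_arn],
    }

def role_mapping_url(host, security_role):
    """Endpoint for the rolesmapping API, e.g. for 'manage_snapshots'."""
    return host.rstrip("/") + "/_plugins/_security/api/rolesmapping/" + security_role

# Example with placeholder values (not a real account or domain):
payload = build_role_mapping("arn:aws:iam::<account-id>:role/<role-name>")
url = role_mapping_url("https://<your-opensearch-domain-endpoint>", "manage_snapshots")

# With awsauth configured as in Step 5:
# requests.put(url, auth=awsauth, json=payload,
#              headers={"Content-Type": "application/json"})
```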

Step 4: Create the S3 bucket in which the manual snapshots will be stored. Keep public access blocked; the IAM role from Step 1 gives OpenSearch the access it needs.

Step 5: Register the OpenSearch Snapshot Repository via Python

This guide describes how to install the required dependencies and execute a Python script to register a snapshot repository in the OpenSearch Dashboard via its REST API.

Installation Commands:

Run the following commands on your EC2 instance to set up the environment:

sudo yum install -y python3
sudo yum install -y python3-pip
pip3 install boto3
pip3 install requests
pip3 install requests_aws4auth

Script:

This script uses the boto3, requests, and requests_aws4auth libraries to register an S3 snapshot repository in Amazon OpenSearch Service.

import boto3
import requests
from requests_aws4auth import AWS4Auth

# OpenSearch domain endpoint (placeholder) and region
host = 'https://<your-opensearch-domain-endpoint>/'
region = '<aws-region>'
service = 'es'

# Sign requests with the credentials of the IAM role attached to this instance
credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(
    credentials.access_key,
    credentials.secret_key,
    region,
    service,
    session_token=credentials.token
)

# Repository registration endpoint (host already ends with '/')
path = '_snapshot/<repository-name>'
url = host + path

payload = {
    "type": "s3",
    "settings": {
        "bucket": "<your-s3-bucket-name>",
        "region": "<aws-region>",
        "role_arn": "arn:aws:iam::<account-id>:role/<role-name>"
    }
}

headers = {"Content-Type": "application/json"}
r = requests.put(url, auth=awsauth, json=payload, headers=headers)

print(r.status_code)
print(r.text)

Run this script to register the OpenSearch snapshot repository.

Step 6: Execute a Manual Snapshot Backup

To initiate a manual snapshot backup of your OpenSearch indices, run the following REST API command from the OpenSearch Dashboard (Dev Tools):

Command Syntax:  

PUT /_snapshot/<your-snapshot-repo-name>/<snapshot-name> 

To check the status, run the following command in Dev Tools:
 
Command Syntax:

GET /_snapshot/_status 

After running these commands, the manual snapshot is created and stored in the S3 bucket.
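The same two calls can also be scripted with the AWS4Auth setup from Step 5. The helpers below only assemble the request URLs (the endpoint, repository, and snapshot names are placeholders); the signed requests themselves are sketched in comments.

```python
# Sketch: take a manual snapshot and check progress from a script,
# reusing the awsauth object built in Step 5.

def snapshot_url(host, repository, snapshot):
    """URL for PUT /_snapshot/<repository>/<snapshot>."""
    return host.rstrip("/") + "/_snapshot/" + repository + "/" + snapshot

def status_url(host):
    """URL for GET /_snapshot/_status (currently running snapshots)."""
    return host.rstrip("/") + "/_snapshot/_status"

host = "https://<your-opensearch-domain-endpoint>"
create = snapshot_url(host, "<repository-name>", "snapshot-2024-01-01")
status = status_url(host)

# With awsauth configured as in Step 5:
# requests.put(create, auth=awsauth)   # start the snapshot
# requests.get(status, auth=awsauth)   # poll until it completes
```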

In the fast-evolving world of EdTech, agility, scalability, and reliability are non-negotiable. Vidysea, a platform dedicated to guiding professionals through their higher education journey, recognized this early. As their user base expanded and application complexity grew, they needed a robust infrastructure to match their ambitions. Here’s how they transformed their architecture with the help of AWS and Cogniv Technologies.

The Challenge: From Static Hosting to Scalable Infrastructure

Vidysea’s platform, hosted initially on Vercel and later on AWS Amplify, faced growing pains:
– Performance bottlenecks during peak usage
– Limited autoscaling for backend services
– Manual infrastructure management leading to downtime
– Deployment inefficiencies slowing down feature delivery

These challenges highlighted the need for a more scalable, resilient, and automated solution.

The Solution: Migrating to Amazon EKS with Cogniv Technologies

To meet these demands, Vidysea partnered with Cogniv Technologies, an AWS Advanced Tier Partner, to re-architect their platform using Amazon Elastic Kubernetes Service (EKS).

Key Components of the Solution:

Microservices Architecture: Transitioned from monolithic to containerized microservices using Docker and EKS.
CI/CD Pipelines: Automated deployments with GitHub-integrated pipelines for faster, error-free releases.
Container Registry: Used Amazon ECR for secure and scalable image storage.
Database Management: Leveraged Amazon RDS for PostgreSQL to ensure high availability and simplified scaling.
Security & Monitoring: Integrated AWS WAF, CloudWatch, and ALB for robust security and observability.

Solution and AWS Architecture Design:

Cogniv Technologies collaborated closely with Vidysea to devise a customized product strategy aligned with their DevOps goals. Harnessing AWS best practices, we architected a resilient infrastructure to support Vidysea’s microservices architecture and facilitate future enhancements.

 

Key DevOps Practices Implemented:

Version Control

  • Integrated GitHub for managing application code.
  • Enabled seamless collaboration and meticulous change tracking.
  • Ensured transparency and accountability throughout development.

Continuous Integration (CI) and Continuous Delivery (CD)

  • Established CI/CD pipelines for automated build, package, and deployment.
  • Enabled rapid and reliable delivery of microservices to Amazon EKS.
  • Reduced manual errors and accelerated release cycles.

Containerization

  • Adopted Docker for consistent deployment environments.
  • Used Amazon EKS for scalable container orchestration.
  • Stored Docker images in AWS Elastic Container Registry (ECR) for simplified deployment and environment consistency.
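As a concrete illustration, kubectl accepts manifests in JSON as well as YAML, matching the JSON style used elsewhere in this post. A minimal Deployment pulling an image from ECR might look like the sketch below; the names and image URI are hypothetical placeholders, not Vidysea’s actual manifests.

```json
{
  "apiVersion": "apps/v1",
  "kind": "Deployment",
  "metadata": { "name": "vidysea-api" },
  "spec": {
    "replicas": 2,
    "selector": { "matchLabels": { "app": "vidysea-api" } },
    "template": {
      "metadata": { "labels": { "app": "vidysea-api" } },
      "spec": {
        "containers": [
          {
            "name": "api",
            "image": "<account-id>.dkr.ecr.<region>.amazonaws.com/vidysea-api:latest",
            "ports": [ { "containerPort": 8080 } ]
          }
        ]
      }
    }
  }
}
```

Running two replicas behind a Service gives EKS room to reschedule pods during node failures or rollouts without downtime.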

AWS Services in Action

Service      Role
Amazon EKS   Orchestrates containerized workloads with high availability
Amazon RDS   Manages PostgreSQL databases with automated backups
Amazon S3    Stores application assets and backups securely
Amazon EC2   Provides scalable compute resources
AWS WAF      Protects against common web exploits
Amazon ECR   Hosts Docker images for seamless deployment
CloudWatch   Monitors application health and performance

Technical Results

The transformation delivered measurable improvements:
– 90% reduction in deployment time
– 2x increase in deployment frequency (up to 10/day)
– 75% less manual intervention, reducing human error
– 20 hours/week saved per engineer
– 60% faster time-to-market for new features

Lessons Learned

– Containerization is key: Moving to Docker and EKS enabled better scalability and fault isolation.
– Automation accelerates innovation: CI/CD pipelines drastically improved deployment speed and reliability.
– Cloud-native architecture pays off: The shift to microservices empowered Vidysea to respond faster to user needs and security threats.

Cost Comparison: Amplify vs. EKS

 

Setup                  Monthly Cost   Annual Cost
AWS Amplify (Legacy)   $74.71         $896.52
AWS EKS (New)          $383.74        $4,604.88

About the Partner: Cogniv Technologies

Cogniv Technologies brings deep AWS expertise, industry knowledge, and a customer-first approach. As an AWS Advanced Tier Partner, they specialize in:
– Cloud-native architecture
– DevOps and automation
– FinOps and cost optimization

Conclusion

Vidysea’s journey from static hosting to a dynamic, microservices-driven platform on AWS EKS is a testament to the power of cloud transformation. With Cogniv Technologies as their guide, they’ve built a future-ready platform that scales with their mission: empowering learners worldwide.