# Quick Start Guide
Add AWS resources to your application as easily as installing npm packages. This guide will walk you through installing the Hereya CLI, authenticating with Hereya Cloud, bootstrapping your AWS account, and enhancing a simple web application with AWS S3 storage capabilities - all with just a few commands.
## Prerequisites

- Node.js 20 or higher - Hereya CLI requires Node.js 20+
- AWS Account - Hereya is designed for AWS infrastructure management
- AWS CLI configured - Required for AWS resource provisioning
Verify your Node.js version:
```sh
node --version
# Should output v20.0.0 or higher
```

## 1. Installing Hereya CLI
Install Hereya CLI globally using your preferred package manager:
```sh
# npm
npm install -g hereya-cli

# yarn
yarn global add hereya-cli

# pnpm
pnpm add -g hereya-cli
```

Verify the installation:
```sh
hereya --version
```

## Login to Hereya Cloud
After installing the CLI, authenticate with Hereya Cloud. Hereya Cloud stores metadata for your projects and workspaces, and provides access to the registry of packages.
```sh
hereya login
```

This command will:
- Open your web browser to authenticate with Hereya Cloud
- Store your authentication credentials locally
- Give you access to the Hereya package registry
- Enable project and workspace metadata synchronization
## Bootstrap AWS Resources
Hereya is specifically designed for AWS infrastructure management. Before creating your first project, you need to bootstrap Hereya for AWS:
```sh
hereya bootstrap aws
```

This command sets up the necessary AWS resources that Hereya uses to:
- Store and manage infrastructure state
- Track resource deployments across workspaces
- Coordinate AWS CloudFormation stacks
- Manage resource dependencies
The bootstrap process creates:
- An S3 bucket for storing Hereya’s infrastructure state
- IAM roles and policies for managing resources
- CloudFormation stack for Hereya’s core infrastructure
Note: You only need to run this once per AWS account/region. The bootstrap resources are shared across all projects that use Hereya.
## 2. Create a Sample Application
Let’s create a simple web application that we’ll enhance with AWS resources using Hereya. Choose your preferred language:
### Create a Node.js Express Application
- Create a new directory and initialize the project:
```sh
mkdir hello-hereya
cd hello-hereya
npm init -y
```

- Install Express:
```sh
npm install express
```

- Set the package type to module:
```sh
npm pkg set type="module"
```

- Create the application file `app.js`:
```js
import express from 'express';

const app = express();
const port = process.env.PORT || 3000;

// Middleware
app.use(express.json());

// Health check endpoint
app.get('/health', (req, res) => {
  res.json({ status: 'healthy' });
});

// Main endpoint
app.get('/', (req, res) => {
  res.send('Hello from Hereya!');
});

// S3 Upload endpoint placeholder
app.post('/upload', (req, res) => {
  // TODO: Implement S3 upload using AWS SDK
  const bucketName = process.env.bucketName;
  const region = process.env.awsRegion;

  res.json({
    message: 'Upload endpoint ready - S3 integration coming soon',
    bucket: bucketName || 'Not configured',
    region: region || 'Not configured'
  });
});

// List files endpoint placeholder
app.get('/files', (req, res) => {
  // TODO: Implement S3 list objects using AWS SDK
  res.json({
    message: 'List files endpoint ready - S3 integration coming soon',
    files: []
  });
});

app.listen(port, () => {
  console.log(`Server running at http://localhost:${port}`);

  // Log S3 configuration if available
  if (process.env.bucketName) {
    console.log(`S3 Bucket configured: ${process.env.bucketName}`);
  }
});
```

- Update package.json to add start scripts:
```json
{
  "name": "hello-hereya",
  "version": "1.0.0",
  "type": "module",
  "main": "app.js",
  "scripts": {
    "start": "node app.js",
    "dev": "node app.js"
  },
  "dependencies": {
    "express": "^4.18.0"
  }
}
```

- Initialize Hereya in your project:
```sh
# First, create a workspace (environment)
hereya workspace create dev

# Then initialize the project
hereya init hello-hereya -w dev
```

### Create a Go HTTP Server
- Create a new directory and initialize the Go module:
```sh
mkdir hello-hereya
cd hello-hereya
go mod init hello-hereya
```

- Create the application file `main.go`:
```go
package main

import (
    "encoding/json"
    "fmt"
    "log"
    "net/http"
    "os"
)

type HealthResponse struct {
    Status string `json:"status"`
}

type UploadResponse struct {
    Message string `json:"message"`
    Bucket  string `json:"bucket"`
    Region  string `json:"region"`
}

type FilesResponse struct {
    Message string   `json:"message"`
    Files   []string `json:"files"`
}

func healthHandler(w http.ResponseWriter, r *http.Request) {
    response := HealthResponse{Status: "healthy"}
    w.Header().Set("Content-Type", "application/json")
    json.NewEncoder(w).Encode(response)
}

func mainHandler(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "Hello from Hereya!")
}

func uploadHandler(w http.ResponseWriter, r *http.Request) {
    if r.Method != "POST" {
        http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
        return
    }

    // TODO: Implement S3 upload using AWS SDK
    bucketName := os.Getenv("bucketName")
    region := os.Getenv("awsRegion")

    response := UploadResponse{
        Message: "Upload endpoint ready - S3 integration coming soon",
        Bucket:  bucketName,
        Region:  region,
    }

    if bucketName == "" {
        response.Bucket = "Not configured"
    }
    if region == "" {
        response.Region = "Not configured"
    }

    w.Header().Set("Content-Type", "application/json")
    json.NewEncoder(w).Encode(response)
}

func listFilesHandler(w http.ResponseWriter, r *http.Request) {
    // TODO: Implement S3 list objects using AWS SDK
    response := FilesResponse{
        Message: "List files endpoint ready - S3 integration coming soon",
        Files:   []string{},
    }

    w.Header().Set("Content-Type", "application/json")
    json.NewEncoder(w).Encode(response)
}

func main() {
    // Get port from environment or use default
    port := os.Getenv("PORT")
    if port == "" {
        port = "3000"
    }

    // Get S3 configuration from environment
    bucketName := os.Getenv("bucketName")
    region := os.Getenv("awsRegion")

    // Set up routes
    http.HandleFunc("/health", healthHandler)
    http.HandleFunc("/", mainHandler)
    http.HandleFunc("/upload", uploadHandler)
    http.HandleFunc("/files", listFilesHandler)

    // Start server
    log.Printf("Server starting on port %s\n", port)

    // Log S3 configuration if available
    if bucketName != "" {
        log.Printf("S3 Bucket configured: %s (Region: %s)\n", bucketName, region)
    }

    if err := http.ListenAndServe(":"+port, nil); err != nil {
        log.Fatal(err)
    }
}
```

- Initialize Hereya in your project:
```sh
# First, create a workspace (environment)
hereya workspace create dev

# Then initialize the project
hereya init hello-hereya -w dev
```

### Create a Spring Boot Application
- Create a new directory and set up the project:
```sh
mkdir hello-hereya
cd hello-hereya
```

- Create a Maven `pom.xml` (or use Spring Initializr):
```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>3.3.0</version>
    </parent>

    <groupId>com.example</groupId>
    <artifactId>hello-hereya</artifactId>
    <version>1.0.0</version>

    <properties>
        <java.version>21</java.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>
```

- Create the application structure:
```sh
mkdir -p src/main/java/com/example/hereya
```

- Create the main application class `src/main/java/com/example/hereya/Application.java`:
```java
package com.example.hereya;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;
import jakarta.annotation.PostConstruct;
import java.util.List;
import java.util.Map;

@SpringBootApplication
@RestController
public class Application {

    @Value("${bucketName:}")
    private String bucketName;

    @Value("${awsRegion:}")
    private String region;

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

    @GetMapping("/")
    public String hello() {
        return "Hello from Hereya!";
    }

    @GetMapping("/health")
    public Map<String, String> health() {
        return Map.of("status", "healthy");
    }

    @PostMapping("/upload")
    public Map<String, String> upload() {
        // TODO: Implement S3 upload using AWS SDK
        return Map.of(
            "message", "Upload endpoint ready - S3 integration coming soon",
            "bucket", bucketName.isEmpty() ? "Not configured" : bucketName,
            "region", region.isEmpty() ? "Not configured" : region
        );
    }

    @GetMapping("/files")
    public Map<String, Object> listFiles() {
        // TODO: Implement S3 list objects using AWS SDK
        return Map.of(
            "message", "List files endpoint ready - S3 integration coming soon",
            "files", List.of()
        );
    }

    @PostConstruct
    public void init() {
        // Log S3 configuration if available
        if (!bucketName.isEmpty()) {
            System.out.println("S3 Bucket configured: " + bucketName + " (Region: " + region + ")");
        }
    }
}
```

- Create `application.properties` in `src/main/resources/`:
```properties
server.port=${PORT:3000}
```

- Initialize Hereya in your project:
```sh
# First, create a workspace (environment)
hereya workspace create dev

# Then initialize the project
hereya init hello-hereya -w dev
```

## Understanding the Hereya Configuration
After initialization, Hereya creates a hereya.yaml file in your project root, similar to how npm creates a package.json:
```yaml
project: hello-hereya
workspace: dev
```

This file tracks:
- project: Your project identifier in Hereya Cloud
- workspace: The current environment for your application
- packages: AWS resources added to your project (like dependencies in package.json)
## What are Workspaces?
Workspaces in Hereya represent different environments for your application. You can name them anything you like - common examples include:
- dev - Development environment
- staging - Staging/testing environment
- production - Production environment
- test - Testing environment
- demo - Demo environment
- feature-xyz - Feature branch environment
Each workspace maintains its own:
- Infrastructure state
- Environment variables
- AWS resources
- Configuration parameters
## Test Your Application
Before adding AWS resources, verify your application runs correctly:
```sh
# Node.js
npm start

# Go
go run main.go

# Spring Boot
mvn spring-boot:run
```

Visit http://localhost:3000. You should see “Hello from Hereya!” when visiting the root URL, and a health check response at `/health`.
## 3. Adding AWS S3 and Running Your Application
Now let’s add an S3 bucket to your application, just like you would add an npm package. This demonstrates how Hereya makes AWS resources as easy to manage as code dependencies.
### Adding the S3 Bucket Package
```sh
# Add an S3 bucket package configured for development
hereya add aws/s3bucket -p "namePrefix=myapp" -p "autoDeleteObjects=true"
```

Parameters:
- `namePrefix` - A prefix for the bucket name (default: “hereya”). The actual bucket name will be generated as `{namePrefix}-{random-suffix}` to ensure global uniqueness
- `autoDeleteObjects` - When set to `true`, all objects in the bucket will be automatically deleted when you run `hereya down` (default: `false`). For this quick start, we’re using `true` to make cleanup easier. ⚠️ Important: In production, you should leave this as `false` (the default) to prevent accidental data loss.
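The naming scheme above can be illustrated with a short JavaScript sketch. Note this is only an illustration of the `{namePrefix}-{random-suffix}` pattern; the actual suffix format Hereya generates may differ:

```javascript
import { randomBytes } from 'node:crypto';

// Illustrative only: mimics the `{namePrefix}-{random-suffix}` scheme
// described above. Hereya generates its own suffix internally.
function exampleBucketName(namePrefix = 'hereya') {
  const suffix = randomBytes(3).toString('hex'); // 6 hex chars, e.g. 'a1b2c3'
  return `${namePrefix}-${suffix}`;
}

console.log(exampleBucketName('myapp')); // e.g. 'myapp-3f9c2e'
```

The random suffix is what lets two users pick the same `namePrefix` without colliding in S3’s global bucket namespace.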
After adding the package, your hereya.yaml will look like:
```yaml
project: hello-hereya
workspace: dev
packages:
  aws/s3bucket:
    version: "0.1.1"
```

The package parameters are stored separately in hereyaconfig/hereyavars/aws--s3bucket.yaml:
```yaml
namePrefix: myapp
autoDeleteObjects: true
```

### Using Environment Variables
The following environment variables are now available:
- `bucketName` - The generated bucket name (e.g., `myapp-abc123`)
- `awsRegion` - The AWS region where the bucket was created
- `iamPolicyAwsS3Bucket` - IAM policy document for bucket access (JSON)
View all environment variables available in your workspace:
```sh
hereya env
```

This will display something like:
```sh
bucketName=myapp-abc123
awsRegion=us-east-1
iamPolicyAwsS3Bucket={"Version":"2012-10-17","Statement":[...]}
# ... other environment variables
```

These environment variables are automatically injected when you run your application with Hereya.
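Inside your application these injected values are ordinary environment variables. A minimal Node sketch of reading them (the values set below are hypothetical stand-ins for what `hereya run` would inject):

```javascript
// Hypothetical stand-ins for the values `hereya run` would inject.
process.env.bucketName ??= 'myapp-abc123';
process.env.awsRegion ??= 'us-east-1';
process.env.iamPolicyAwsS3Bucket ??= JSON.stringify({
  Version: '2012-10-17',
  Statement: [],
});

const bucketName = process.env.bucketName;
const region = process.env.awsRegion;

// The IAM policy arrives as a JSON string; parse it when you need to
// inspect its statements or attach it to a role.
const policy = JSON.parse(process.env.iamPolicyAwsS3Bucket);

console.log(`Bucket: ${bucketName} (${region}), policy version: ${policy.Version}`);
```

Because the names are plain environment variables, the same pattern works unchanged in Go (`os.Getenv("bucketName")`) and Spring (`${bucketName:}`), as the code later in this guide shows.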
### Running Your Application with Hereya
Run your application with Hereya to automatically inject the AWS environment variables:
```sh
# Node.js
hereya run -- npm start

# Go
hereya run -- go run main.go

# Spring Boot
hereya run -- mvn spring-boot:run
```

Your application now has access to the S3 bucket configuration! Test the endpoints:
Check the welcome message:
```sh
curl http://localhost:3000/
```

Check S3 configuration status:
```sh
curl -X POST http://localhost:3000/upload
```

Check files endpoint placeholder:
```sh
curl http://localhost:3000/files
```

The upload and files endpoints will show your S3 bucket name and region, confirming the environment variables are properly configured.
## 4. Implementing S3 Upload
Now let’s implement the actual S3 functionality. Choose your language below for the complete implementation:
### Node.js

#### Install Dependencies
```sh
npm install @aws-sdk/client-s3 multer
```

#### Update Your Application
Replace your `app.js` with this complete implementation, which adds the S3 client, upload, and listing functionality:
```js
import express from 'express';
import multer from 'multer';
import { S3Client, PutObjectCommand, ListObjectsV2Command } from '@aws-sdk/client-s3';

const app = express();
const port = process.env.PORT || 3000;

// Configure multer for memory storage
const upload = multer({
  storage: multer.memoryStorage(),
  limits: {
    fileSize: 10 * 1024 * 1024 // 10MB limit
  }
});

// Initialize S3 client
const s3Client = new S3Client({
  region: process.env.awsRegion || 'us-east-1'
});

// Middleware
app.use(express.json());

// Health check endpoint
app.get('/health', (req, res) => {
  res.json({ status: 'healthy' });
});

// Main endpoint
app.get('/', (req, res) => {
  res.send('Hello from Hereya!');
});

// S3 Upload endpoint
app.post('/upload', upload.single('file'), async (req, res) => {
  const bucketName = process.env.bucketName;

  if (!bucketName) {
    return res.status(500).json({ error: 'S3 bucket not configured' });
  }

  if (!req.file) {
    return res.status(400).json({ error: 'No file provided' });
  }

  try {
    const key = `uploads/${Date.now()}-${req.file.originalname}`;

    const command = new PutObjectCommand({
      Bucket: bucketName,
      Key: key,
      Body: req.file.buffer,
      ContentType: req.file.mimetype
    });

    await s3Client.send(command);

    res.json({
      message: 'File uploaded successfully',
      bucket: bucketName,
      key: key,
      size: req.file.size
    });
  } catch (error) {
    console.error('Upload error:', error);
    res.status(500).json({
      error: 'Failed to upload file',
      details: error.message
    });
  }
});

// List files endpoint
app.get('/files', async (req, res) => {
  const bucketName = process.env.bucketName;

  if (!bucketName) {
    return res.status(500).json({ error: 'S3 bucket not configured' });
  }

  try {
    const command = new ListObjectsV2Command({
      Bucket: bucketName,
      Prefix: 'uploads/',
      MaxKeys: 100
    });

    const response = await s3Client.send(command);

    const files = (response.Contents || []).map(item => ({
      key: item.Key,
      size: item.Size,
      lastModified: item.LastModified
    }));

    res.json({
      bucket: bucketName,
      count: files.length,
      files: files
    });
  } catch (error) {
    console.error('List error:', error);
    res.status(500).json({
      error: 'Failed to list files',
      details: error.message
    });
  }
});

app.listen(port, () => {
  console.log(`Server running at http://localhost:${port}`);

  if (process.env.bucketName) {
    console.log(`S3 Bucket configured: ${process.env.bucketName}`);
  } else {
    console.log('Warning: bucketName not configured');
  }
});
```

### Go

#### Install Dependencies
```sh
go get github.com/aws/aws-sdk-go-v2/config
go get github.com/aws/aws-sdk-go-v2/service/s3
go get github.com/aws/aws-sdk-go-v2/feature/s3/manager
```

#### Update Your Application
Replace your main.go with this complete implementation:
```go
package main

import (
    "bytes"
    "context"
    "encoding/json"
    "fmt"
    "io"
    "log"
    "net/http"
    "os"
    "time"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/feature/s3/manager"
    "github.com/aws/aws-sdk-go-v2/service/s3"
)

type HealthResponse struct {
    Status string `json:"status"`
}

type UploadResponse struct {
    Message string `json:"message"`
    Bucket  string `json:"bucket"`
    Key     string `json:"key"`
    Size    int64  `json:"size"`
}

type ErrorResponse struct {
    Error   string `json:"error"`
    Details string `json:"details,omitempty"`
}

type FileInfo struct {
    Key          string    `json:"key"`
    Size         int64     `json:"size"`
    LastModified time.Time `json:"lastModified"`
}

type FilesResponse struct {
    Bucket string     `json:"bucket"`
    Count  int        `json:"count"`
    Files  []FileInfo `json:"files"`
}

var s3Client *s3.Client
var s3Uploader *manager.Uploader

func initAWS() {
    cfg, err := config.LoadDefaultConfig(context.TODO())
    if err != nil {
        log.Printf("Warning: Unable to load AWS config: %v", err)
    }

    s3Client = s3.NewFromConfig(cfg)
    s3Uploader = manager.NewUploader(s3Client)
}

func healthHandler(w http.ResponseWriter, r *http.Request) {
    response := HealthResponse{Status: "healthy"}
    w.Header().Set("Content-Type", "application/json")
    json.NewEncoder(w).Encode(response)
}

func mainHandler(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "Hello from Hereya!")
}

func uploadHandler(w http.ResponseWriter, r *http.Request) {
    if r.Method != "POST" {
        http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
        return
    }

    bucketName := os.Getenv("bucketName")
    if bucketName == "" {
        w.Header().Set("Content-Type", "application/json")
        w.WriteHeader(http.StatusInternalServerError)
        json.NewEncoder(w).Encode(ErrorResponse{Error: "S3 bucket not configured"})
        return
    }

    // Parse multipart form (10MB max)
    err := r.ParseMultipartForm(10 << 20)
    if err != nil {
        w.Header().Set("Content-Type", "application/json")
        w.WriteHeader(http.StatusBadRequest)
        json.NewEncoder(w).Encode(ErrorResponse{Error: "Failed to parse form"})
        return
    }

    file, header, err := r.FormFile("file")
    if err != nil {
        w.Header().Set("Content-Type", "application/json")
        w.WriteHeader(http.StatusBadRequest)
        json.NewEncoder(w).Encode(ErrorResponse{Error: "No file provided"})
        return
    }
    defer file.Close()

    // Read file content
    fileBytes, err := io.ReadAll(file)
    if err != nil {
        w.Header().Set("Content-Type", "application/json")
        w.WriteHeader(http.StatusInternalServerError)
        json.NewEncoder(w).Encode(ErrorResponse{Error: "Failed to read file"})
        return
    }

    // Generate S3 key
    key := fmt.Sprintf("uploads/%d-%s", time.Now().Unix(), header.Filename)

    // Upload to S3
    _, err = s3Uploader.Upload(context.TODO(), &s3.PutObjectInput{
        Bucket: aws.String(bucketName),
        Key:    aws.String(key),
        Body:   bytes.NewReader(fileBytes),
    })

    if err != nil {
        w.Header().Set("Content-Type", "application/json")
        w.WriteHeader(http.StatusInternalServerError)
        json.NewEncoder(w).Encode(ErrorResponse{
            Error:   "Failed to upload file",
            Details: err.Error(),
        })
        return
    }

    response := UploadResponse{
        Message: "File uploaded successfully",
        Bucket:  bucketName,
        Key:     key,
        Size:    header.Size,
    }

    w.Header().Set("Content-Type", "application/json")
    json.NewEncoder(w).Encode(response)
}

func listFilesHandler(w http.ResponseWriter, r *http.Request) {
    bucketName := os.Getenv("bucketName")
    if bucketName == "" {
        w.Header().Set("Content-Type", "application/json")
        w.WriteHeader(http.StatusInternalServerError)
        json.NewEncoder(w).Encode(ErrorResponse{Error: "S3 bucket not configured"})
        return
    }

    // List objects from S3
    result, err := s3Client.ListObjectsV2(context.TODO(), &s3.ListObjectsV2Input{
        Bucket:  aws.String(bucketName),
        Prefix:  aws.String("uploads/"),
        MaxKeys: aws.Int32(100),
    })

    if err != nil {
        w.Header().Set("Content-Type", "application/json")
        w.WriteHeader(http.StatusInternalServerError)
        json.NewEncoder(w).Encode(ErrorResponse{
            Error:   "Failed to list files",
            Details: err.Error(),
        })
        return
    }

    files := make([]FileInfo, 0)
    for _, item := range result.Contents {
        files = append(files, FileInfo{
            Key:          *item.Key,
            Size:         *item.Size,
            LastModified: *item.LastModified,
        })
    }

    response := FilesResponse{
        Bucket: bucketName,
        Count:  len(files),
        Files:  files,
    }

    w.Header().Set("Content-Type", "application/json")
    json.NewEncoder(w).Encode(response)
}

func main() {
    // Initialize AWS
    initAWS()

    // Get port from environment or use default
    port := os.Getenv("PORT")
    if port == "" {
        port = "3000"
    }

    // Get S3 configuration from environment
    bucketName := os.Getenv("bucketName")
    region := os.Getenv("awsRegion")

    // Set up routes
    http.HandleFunc("/health", healthHandler)
    http.HandleFunc("/", mainHandler)
    http.HandleFunc("/upload", uploadHandler)
    http.HandleFunc("/files", listFilesHandler)

    // Start server
    log.Printf("Server starting on port %s\n", port)

    if bucketName != "" {
        log.Printf("S3 Bucket configured: %s (Region: %s)\n", bucketName, region)
    } else {
        log.Printf("Warning: bucketName not configured")
    }

    if err := http.ListenAndServe(":"+port, nil); err != nil {
        log.Fatal(err)
    }
}
```

### Spring Boot (Java)

#### Update Dependencies
Add these dependencies to your pom.xml:
```xml
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>software.amazon.awssdk</groupId>
        <artifactId>s3</artifactId>
        <version>2.20.0</version>
    </dependency>
    <dependency>
        <groupId>software.amazon.awssdk</groupId>
        <artifactId>sso</artifactId>
        <version>2.20.0</version>
    </dependency>
    <dependency>
        <groupId>software.amazon.awssdk</groupId>
        <artifactId>ssooidc</artifactId>
        <version>2.20.0</version>
    </dependency>
    <dependency>
        <groupId>software.amazon.awssdk</groupId>
        <artifactId>sts</artifactId>
        <version>2.20.0</version>
    </dependency>
</dependencies>
```

#### Update application.properties
Update your src/main/resources/application.properties with:
```properties
server.port=${PORT:3000}
s3.bucketName=${bucketName:}
s3.awsRegion=${awsRegion:}
```

#### Update Your Application
Replace your Application.java with this complete implementation:
```java
package com.example.hereya;

import java.io.IOException;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.multipart.MultipartFile;

import jakarta.annotation.PostConstruct;
import software.amazon.awssdk.auth.credentials.DefaultCredentialsProvider;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.ListObjectsV2Request;
import software.amazon.awssdk.services.s3.model.ListObjectsV2Response;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;
import software.amazon.awssdk.services.s3.model.S3Exception;

@SpringBootApplication
@RestController
public class Application {

    @Value("${s3.bucketName:}")
    private String bucketName;

    @Value("${s3.awsRegion:}")
    private String region;

    private S3Client s3Client;

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

    @GetMapping("/")
    public String hello() {
        return "Hello from Hereya!";
    }

    @GetMapping("/health")
    public Map<String, String> health() {
        return Map.of("status", "healthy");
    }

    @PostMapping("/upload")
    public ResponseEntity<?> upload(@RequestParam("file") MultipartFile file) {
        if (bucketName.isEmpty()) {
            return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR)
                .body(Map.of("error", "S3 bucket not configured"));
        }

        if (file.isEmpty()) {
            return ResponseEntity.badRequest()
                .body(Map.of("error", "No file provided"));
        }

        try {
            String key = "uploads/" + System.currentTimeMillis() + "-" + file.getOriginalFilename();

            PutObjectRequest putObjectRequest = PutObjectRequest.builder()
                .bucket(bucketName)
                .key(key)
                .contentType(file.getContentType())
                .build();

            s3Client.putObject(putObjectRequest, RequestBody.fromBytes(file.getBytes()));

            return ResponseEntity.ok(Map.of(
                "message", "File uploaded successfully",
                "bucket", bucketName,
                "key", key,
                "size", file.getSize()
            ));
        } catch (IOException e) {
            return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR)
                .body(Map.of(
                    "error", "Failed to upload file",
                    "details", e.getMessage()
                ));
        } catch (S3Exception e) {
            return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR)
                .body(Map.of(
                    "error", "S3 operation failed",
                    "details", e.getMessage()
                ));
        }
    }

    @GetMapping("/files")
    public ResponseEntity<?> listFiles() {
        if (bucketName.isEmpty()) {
            return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR)
                .body(Map.of("error", "S3 bucket not configured"));
        }

        try {
            ListObjectsV2Request listRequest = ListObjectsV2Request.builder()
                .bucket(bucketName)
                .prefix("uploads/")
                .maxKeys(100)
                .build();

            ListObjectsV2Response listResponse = s3Client.listObjectsV2(listRequest);

            List<Map<String, Object>> files = listResponse.contents().stream()
                .map(s3Object -> Map.<String, Object>of(
                    "key", s3Object.key(),
                    "size", s3Object.size(),
                    "lastModified", s3Object.lastModified()
                ))
                .collect(Collectors.toList());

            return ResponseEntity.ok(Map.of(
                "bucket", bucketName,
                "count", files.size(),
                "files", files
            ));
        } catch (S3Exception e) {
            return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR)
                .body(Map.of(
                    "error", "Failed to list files",
                    "details", e.getMessage()
                ));
        }
    }

    @PostConstruct
    public void init() {
        if (!region.isEmpty()) {
            this.s3Client = S3Client.builder()
                .region(Region.of(region))
                .credentialsProvider(DefaultCredentialsProvider.create())
                .build();
        }

        if (!bucketName.isEmpty()) {
            System.out.println("S3 Bucket configured: " + bucketName + " (Region: " + region + ")");
        } else {
            System.out.println("Warning: bucketName not configured");
        }
    }
}
```

## Testing Your S3 Integration
### Run Your Application with Hereya
```sh
# Node.js
hereya run -- npm start

# Go
hereya run -- go run main.go

# Spring Boot
hereya run -- mvn spring-boot:run
```

### Upload a Test File
Create a test file and upload it using curl:
```sh
# Create a test file
echo 'Hello from Hereya S3 integration!' > test.txt

# Upload the file
curl -X POST http://localhost:3000/upload \
  -F "file=@test.txt" \
  -H "Accept: application/json"
```

You should see a response like:
```json
{
  "message": "File uploaded successfully",
  "bucket": "myapp-abc123",
  "key": "uploads/1234567890-test.txt",
  "size": 34
}
```

### List Uploaded Files
```sh
curl http://localhost:3000/files
```

You should see your uploaded files:
```json
{
  "bucket": "myapp-abc123",
  "count": 1,
  "files": [
    {
      "key": "uploads/1234567890-test.txt",
      "size": 34,
      "lastModified": "2024-01-15T10:30:00Z"
    }
  ]
}
```

### Verify in AWS Console
- Open the AWS S3 Console
- Find your bucket (named as you specified)
- Navigate to the `uploads/` folder
- You should see your uploaded files
If you encounter any issues with your S3 integration, see the Troubleshooting section at the end of this guide.
## 5. Deploying to AWS AppRunner
Now that you have a working application with S3 integration, let’s deploy it to AWS AppRunner - a fully managed container service that automatically scales your application based on traffic.
### Understanding Deployment Workspaces and Profiles
Hereya distinguishes between regular workspaces (like our dev workspace) and deployment workspaces. Deployment workspaces are specifically designed to provision and manage cloud infrastructure for hosting your applications.
Key concepts:
- Deployment packages: Special packages (like `aws/apprunner`) that provide hosting infrastructure, stored in the `deploy` section of `hereya.yaml`
- Profiles: Configuration tags that allow different settings per environment (e.g., different bucket names for dev vs staging)
- Deployment flag (`-d`): Required when creating workspaces intended for deployment
### Creating a Staging Deployment Workspace
Create a new workspace specifically for staging deployments:
```sh
hereya workspace create staging -d --profile=staging
```

This command:
- Creates a workspace named `staging`
- Marks it as a deployment workspace with the `-d` flag (required for `hereya deploy`)
- Assigns the `staging` profile for configuration segregation
### Adding the AWS AppRunner Deployment Package
Add the AWS AppRunner package to enable container deployment:
```sh
hereya add aws/apprunner
```

After adding the deployment package, your hereya.yaml will look like:
```yaml
project: hello-hereya
workspace: dev
packages:
  aws/s3bucket:
    version: "0.3.0"
deploy:
  aws/apprunner:
    version: "0.3.0"
```

Notice how deployment packages are stored in the `deploy` section, separate from regular packages. Deployment packages are only provisioned when you run `hereya deploy`.
### Creating a Dockerfile
AWS AppRunner requires a Dockerfile to containerize your application for deployment. Create a Dockerfile in your project root directory with the appropriate configuration for your language:
For Node.js:

```dockerfile
# Build stage
FROM node:22-alpine AS builder

WORKDIR /app

# Copy package files
COPY package*.json ./

# Install dependencies
RUN npm ci --only=production

# Production stage
FROM node:22-alpine

WORKDIR /app

# Copy dependencies from builder
COPY --from=builder /app/node_modules ./node_modules

# Copy application code
COPY package*.json ./
COPY app.js ./

# Create non-root user
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nodejs -u 1001

# Change ownership
RUN chown -R nodejs:nodejs /app

USER nodejs

# Expose port
EXPOSE 3000

# Start the application
CMD ["npm", "start"]
```

This Dockerfile:
- Uses multi-stage build for optimized image size
- Installs only production dependencies in a separate build stage
- Runs the application as a non-root user for enhanced security
- Copies only necessary files (`app.js` and package files)
- Starts the application with `npm start`
For Go:

```dockerfile
# Build stage
FROM golang:1.25.1-alpine AS builder

WORKDIR /app

# Copy go mod files
COPY go.mod go.sum ./

# Download dependencies
RUN go mod download

# Copy source code
COPY . .

# Build the binary
RUN CGO_ENABLED=0 GOOS=linux go build -o main .

# Runtime stage
FROM alpine:latest

# Install ca-certificates for HTTPS requests
RUN apk --no-cache add ca-certificates

WORKDIR /app

# Create non-root user
RUN addgroup -g 1001 -S appuser && \
    adduser -S appuser -u 1001

# Copy the binary from builder stage
COPY --from=builder /app/main .

# Change ownership
RUN chown -R appuser:appuser /app

USER appuser

# Expose the port
EXPOSE 3000

# Run the binary
CMD ["./main"]
```

This multi-stage Dockerfile:
- Uses a build stage with full Go toolchain to compile your application
- Creates a minimal runtime image with just the compiled binary
- Runs the application as a non-root user for enhanced security
- Results in a much smaller and more secure container image
For Spring Boot (Java):

```dockerfile
# Build stage
FROM maven:3.9-eclipse-temurin-21-alpine AS builder

WORKDIR /app

# Copy POM file
COPY pom.xml .

# Download dependencies (for better layer caching)
RUN mvn dependency:go-offline

# Copy source code
COPY src ./src

# Build the application
RUN mvn clean package -DskipTests

# Runtime stage
FROM eclipse-temurin:21-jre-alpine

WORKDIR /app

# Create non-root user
RUN addgroup -g 1001 -S appuser && \
    adduser -S appuser -u 1001

# Copy the built JAR from builder stage
COPY --from=builder /app/target/hello-hereya-1.0.0.jar app.jar

# Change ownership
RUN chown -R appuser:appuser /app

USER appuser

# Expose the port
EXPOSE 3000

# Run the application
CMD ["java", "-jar", "app.jar"]
```

This multi-stage Dockerfile:
- Uses Maven to build your Spring Boot application in the first stage
- Creates a runtime image with just the JRE and your compiled JAR
- Runs the application as a non-root user for enhanced security
- Optimizes layer caching by downloading dependencies before copying source code
Configuring Profile-Specific Settings
Profiles allow you to use different configurations for different environments. Update your S3 bucket configuration to use a different name prefix for staging.
Edit hereyaconfig/hereyavars/aws--s3bucket.yaml:
```yaml
namePrefix: myapp
autoDeleteObjects: "true"
---
profile: staging
namePrefix: hello-staging
```

The `---` separator creates profile-specific configuration. When deploying to the staging workspace (which uses the staging profile), Hereya will:

- Use `hello-staging` as the bucket name prefix instead of `myapp`
- Create a bucket like `hello-staging-abc123` instead of `myapp-abc123`
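The override semantics can be pictured with a short sketch. This is an illustration only, not Hereya's actual implementation, and the toy parser assumes simple `key: value` pairs like those in `aws--s3bucket.yaml`:

```javascript
// Illustrative sketch only -- NOT Hereya's implementation.
// Shows how a profile section overrides the base configuration.
function parseSection(text) {
  const config = {};
  for (const line of text.trim().split("\n")) {
    const idx = line.indexOf(":");
    const key = line.slice(0, idx).trim();
    config[key] = line.slice(idx + 1).trim().replace(/^"|"$/g, "");
  }
  return config;
}

const raw = `namePrefix: myapp
autoDeleteObjects: "true"
---
profile: staging
namePrefix: hello-staging`;

// Split on the "---" separator: the first section is the base,
// the remaining sections are profile-specific overrides
const [base, ...profiles] = raw.split(/^---$/m).map(parseSection);

// Resolve the effective config for a profile: profile keys win over base keys
function resolveConfig(profileName) {
  const override = profiles.find((p) => p.profile === profileName) || {};
  return { ...base, ...override };
}

console.log(resolveConfig("staging").namePrefix); // "hello-staging"
console.log(resolveConfig(null).namePrefix);      // "myapp"
```

The key point is that unspecified keys fall through to the base section, so the staging bucket still inherits `autoDeleteObjects: "true"`.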
Deploying Your Application
Deploy your application to AWS AppRunner:

```shell
hereya deploy -w staging
```

This command will:
- Provision regular packages in the staging workspace (S3 bucket with staging-specific configuration)
- Provision deployment packages (AWS AppRunner service)
- Build and deploy your application to the AppRunner service
- Configure environment variables so your deployed app can access the staging S3 bucket
The deployment process will show progress indicators and output the deployed application URL when complete.
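The injected configuration reaches your code as ordinary environment variables. A hypothetical sketch of reading one in the application (the variable name `bucketName` is an assumption for illustration; check `hereya env -w staging` for the actual names in your project):

```javascript
// Hypothetical sketch: read the bucket name injected at deploy time.
// `bucketName` is an assumed variable name, used here for illustration.
function getBucketName(env = process.env) {
  const name = env.bucketName || env.BUCKET_NAME;
  if (!name) {
    throw new Error("No bucket name in the environment; was the app deployed with Hereya?");
  }
  return name;
}

// In the deployed service the variable comes from AppRunner;
// here we pass a fake environment to demonstrate the lookup:
console.log(getBucketName({ bucketName: "hello-staging-abc123" })); // "hello-staging-abc123"
```

Failing fast with a clear error when the variable is missing makes misconfigured deployments much easier to diagnose than a late S3 "bucket not found" failure.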
Testing Your Deployed Application
Once deployment is complete, you’ll receive a URL from the deployment output. Substitute it for the placeholder URL in the commands below.
Test your deployed application using the commands below:
Check the welcome message:
```shell
curl https://your-app.region.awsapprunner.com/
```

Test the health endpoint:

```shell
curl https://your-app.region.awsapprunner.com/health
```

Upload a file to staging S3:

```shell
# Create a test file for staging
echo 'Hello from staging deployment!' > staging-test.txt

# Upload to the deployed application
curl -X POST https://your-app.region.awsapprunner.com/upload \
  -F "file=@staging-test.txt" \
  -H "Accept: application/json"
```

List files in staging S3:

```shell
curl https://your-app.region.awsapprunner.com/files
```

You should see that your staging deployment is using the `hello-staging-xxx` bucket, confirming that profile-specific configuration is working correctly.
Managing Your Deployment
Updating your deployment: After making code changes, redeploy with the same command:

```shell
hereya deploy -w staging
```

Viewing environment variables in staging:

```shell
hereya env -w staging
```

Your application is now deployed and accessible from anywhere on the internet, with its own dedicated S3 bucket for staging data!
6. Cleaning Up Resources
When you’re done experimenting, you can clean up the resources to avoid ongoing AWS charges. Hereya provides different commands for cleaning up deployments versus development resources.
Remove the Staging Deployment
To remove your staging deployment (AppRunner service and staging S3 bucket):

```shell
hereya undeploy -w staging
```

This command will:

- Remove the AWS AppRunner service
- Delete the staging S3 bucket and all uploaded files (since we used `autoDeleteObjects: "true"`)
- Clean up all staging deployment resources
- Keep your deployment configuration in `hereya.yaml` intact
Remove Development Resources
To remove your development resources (the dev workspace S3 bucket):

```shell
hereya down
```

This command will:

- Delete the dev workspace S3 bucket and all uploaded files
- Clean up the development AWS resources
- Keep your `hereya.yaml` configuration intact
Re-provisioning Resources
Since your configurations are saved in `hereya.yaml`, you can recreate resources anytime:

Re-deploy to staging:

```shell
hereya deploy -w staging
```

Re-provision dev resources:

```shell
hereya up
```

Verifying Cleanup
After cleanup:
- Check the AWS S3 Console to verify bucket deletions
- Check the AWS AppRunner Console to verify service removal
- Run `hereya env -w staging` and `hereya env` to confirm environment variables are cleared
Troubleshooting
S3 Integration Issues
AWS Credentials Error
If you see “Unable to locate credentials”:
- Ensure the AWS CLI is configured: `aws configure`
- Or set the environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`
Bucket Access Denied
If you get permission errors:
- Check your AWS IAM user has S3 permissions
- Ensure the bucket name matches exactly
File Size Limits
- The examples limit uploads to 10MB
- Adjust the limits in the code for larger files
- For very large files, consider using multipart uploads
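As a framework-agnostic illustration of that limit (a sketch, not the exact code from the examples), the guard boils down to one comparison that you can tune:

```javascript
// Sketch of the upload size guard; 10MB matches the guide's examples.
const MAX_UPLOAD_BYTES = 10 * 1024 * 1024; // 10MB

function checkUploadSize(sizeInBytes, limit = MAX_UPLOAD_BYTES) {
  if (sizeInBytes > limit) {
    return { ok: false, error: `file exceeds limit of ${limit} bytes` };
  }
  return { ok: true };
}

console.log(checkUploadSize(5 * 1024 * 1024).ok);  // true
// Pass a larger limit for bigger files:
console.log(checkUploadSize(25 * 1024 * 1024, 50 * 1024 * 1024).ok); // true
```

If you raise the limit substantially, remember that the AppRunner service and any intermediate proxies must also allow the larger request bodies.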
Installation Issues
Node.js version error
If you see an error about Node.js version, you need to upgrade to Node.js 20 or higher:
- Download from nodejs.org
- Or use your system’s package manager
- Verify with `node --version` after installation
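If you want to script that check, a small sketch (only the major version is compared, which is all the requirement needs):

```javascript
// Sketch: compare a Node.js version string against the v20+ requirement.
function meetsNodeRequirement(versionString, minMajor = 20) {
  const major = Number(versionString.replace(/^v/, "").split(".")[0]);
  return major >= minMajor;
}

console.log(meetsNodeRequirement("v22.1.0"));  // true
console.log(meetsNodeRequirement("v18.19.0")); // false
console.log(meetsNodeRequirement(process.version)); // your current Node.js
```

This is handy in a preinstall script or CI step so version problems surface before `hereya` commands fail with a less obvious error.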
Permission errors during global install
If you encounter permission errors:

```shell
# For npm
sudo npm install -g hereya-cli

# Or configure npm to use a different directory
npm config set prefix '~/.npm-global'
export PATH=~/.npm-global/bin:$PATH
```

AWS credentials not configured
If you haven’t configured AWS credentials yet:

```shell
aws configure
# Enter your AWS Access Key ID, Secret Access Key, and region
```