S3 & API Gateway: The Expensive Architecture Decisions You Can't Undo (Or Can You?)
Jan 11, 2026
9 minute read

By
Andy Van Becelaere
Cloud Architect
Part 2 of 3: The $10K Mistake Series
In Part 1, we tackled CloudFront configuration mistakes that can drain thousands from your AWS budget. But here’s the uncomfortable truth: CloudFront optimizations are the easy wins. They’re configuration changes you can make in an afternoon without touching a single line of code. S3 and API Gateway mistakes are different. They’re often baked into your architecture from day one, and fixing them means refactoring code, updating DNS records, and having awkward conversations with your team about why you chose REST APIs when HTTP APIs would’ve saved thousands of dollars.
The good news? I’ve helped dozens of companies unwind these decisions, and it’s almost never as painful as people think. Let’s talk about the three most expensive S3 and API Gateway anti-patterns I see, and more importantly, how to fix them without rewriting your entire application.
The S3 Transfer Cost Surprise
I was working with a startup that had built a pretty standard web app architecture. A React frontend hosted in S3, served directly to users via an S3 static website endpoint. They’d read that S3 was cheap storage, and it was: they were paying maybe $20/month for storage. But their data transfer costs were $340/month and climbing. When I asked why they weren’t using CloudFront, the lead developer said something I hear all the time: “We didn’t think we needed a CDN. Our users are mostly in the US, and our S3 bucket is in us-east-1. It’s fast enough.”
Here’s the thing nobody tells you about S3 pricing: storage is cheap, but data transfer out to the internet costs $0.09 per GB for the first 10TB. That’s actually more expensive than CloudFront’s data transfer pricing, which starts at $0.085 per GB and gets cheaper with volume discounts. But here’s the real kicker: data transfer from S3 to CloudFront is completely free. Zero. Nada. You’re literally paying more to serve content directly from S3 than you would to put CloudFront in front of it.
I pulled their usage data. They were serving about 4TB of data per month: mostly images, JavaScript bundles, and CSS files. At $0.09 per GB, that’s $360 in S3 data transfer costs. If we put CloudFront in front with even a modest 70% cache hit ratio, we’d be looking at about 1.2TB of S3-to-CloudFront transfer (free) and 4TB of CloudFront-to-users transfer at $0.085 per GB, which is $340. But wait: with CloudFront compression enabled for their text-based assets, that 4TB would drop to about 2.5TB, bringing the cost down to $212.50.
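The arithmetic above is easy to sanity-check. Here's a quick sketch: the traffic numbers and cache-hit assumptions are the ones from this engagement, and the per-GB rates are the public first-tier us-east-1 list prices, so treat it as a model, not a quote.

```python
# Rough monthly cost model: serving static assets straight from S3
# vs. putting CloudFront in front. Rates are first-tier us-east-1 prices.
S3_EGRESS_PER_GB = 0.09    # S3 -> internet, first 10 TB
CF_EGRESS_PER_GB = 0.085   # CloudFront -> internet, first tier
# S3 -> CloudFront transfer is free, so cache misses add no transfer cost.

def s3_direct_cost(gb_served: float) -> float:
    """Monthly cost of serving gb_served directly from S3."""
    return gb_served * S3_EGRESS_PER_GB

def cloudfront_cost(gb_served: float, compression_ratio: float = 1.0) -> float:
    """Monthly cost with CloudFront; compression shrinks bytes sent to users."""
    return gb_served * compression_ratio * CF_EGRESS_PER_GB

monthly_gb = 4000  # ~4 TB/month of images, JS bundles, CSS
print(f"S3 direct:         ${s3_direct_cost(monthly_gb):,.2f}")          # 360.00
print(f"CloudFront:        ${cloudfront_cost(monthly_gb):,.2f}")         # 340.00
print(f"CloudFront + gzip: ${cloudfront_cost(monthly_gb, 0.625):,.2f}")  # 212.50
```

Note that the cache hit ratio doesn't appear in the cost at all: because S3-to-CloudFront transfer is free, caching buys you lower latency and fewer S3 GET request charges, while compression is what shrinks the transfer bill.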
We set up a CloudFront distribution with their S3 bucket as the origin, configured proper cache behaviors with one-year TTLs for versioned assets, and enabled compression. I also had them implement Origin Access Control so users couldn’t bypass CloudFront and hit S3 directly. This wasn’t just about cost: it was about ensuring they got the caching benefits they were paying for.
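For reference, the bucket policy that locks S3 down to a single CloudFront distribution via Origin Access Control looks roughly like this. The bucket name, account ID, and distribution ID below are placeholders; swap in your own.

```python
import json

# Sketch of an S3 bucket policy that only allows reads from one specific
# CloudFront distribution via Origin Access Control (OAC). The bucket,
# account, and distribution identifiers are hypothetical.
BUCKET = "my-frontend-bucket"
DISTRIBUTION_ARN = "arn:aws:cloudfront::123456789012:distribution/EDFDVBD6EXAMPLE"

oac_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCloudFrontServicePrincipalReadOnly",
        "Effect": "Allow",
        "Principal": {"Service": "cloudfront.amazonaws.com"},
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
        # Only this distribution may read, even though the principal is
        # the CloudFront service as a whole.
        "Condition": {"StringEquals": {"AWS:SourceArn": DISTRIBUTION_ARN}},
    }],
}

print(json.dumps(oac_policy, indent=2))
# Apply with: boto3.client("s3").put_bucket_policy(
#     Bucket=BUCKET, Policy=json.dumps(oac_policy))
```

With this in place (and S3 Block Public Access on), direct requests to the bucket get a 403, so all traffic flows through the cache you're counting on.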
The next month, their combined S3 and CloudFront costs came in at $235: down from $360 for S3 alone. That’s $125 in monthly savings, or $1,500 annually. Their page load times improved by 40% because CloudFront’s edge locations were closer to users than their single S3 bucket. And their S3 GET request costs dropped by 85% because CloudFront was serving most requests from cache instead of hitting the origin.
But here’s where it gets interesting. They’d also enabled S3 Transfer Acceleration on their bucket because someone had read it would make uploads faster. Transfer Acceleration adds $0.04 to $0.08 per GB, and since standard uploads to S3 incur no data transfer charge at all, that surcharge was the entire transfer cost of those uploads. They were using it for user profile photo uploads: small files, mostly from users in the US, uploading to a bucket in us-east-1. Transfer Acceleration is designed for large files uploaded from distant geographic regions. For their use case, it was pure added cost for maybe a 5% speed improvement that users couldn’t even perceive.
We disabled Transfer Acceleration and saved another $40/month. Total savings from fixing their S3 architecture: $165/month, or nearly $2,000 per year. The migration took about four hours, most of which was testing to make sure we hadn’t broken anything.
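Turning Transfer Acceleration off is a one-call change. Here's a sketch: the overhead function just mirrors the surcharge math (roughly 1 TB of monthly uploads at $0.04/GB lines up with the $40/month we recovered), and the boto3 call that actually flips the switch is commented out with a placeholder bucket name so nothing here touches a live account.

```python
# S3 Transfer Acceleration surcharge: $0.04/GB via US/EU/JP edges,
# $0.08/GB elsewhere. Uploads to S3 are otherwise free, so this is
# pure added cost, not a markup on an existing charge.
TA_SURCHARGE_PER_GB = 0.04

def ta_overhead(upload_gb: float, rate: float = TA_SURCHARGE_PER_GB) -> float:
    """Monthly cost of Transfer Acceleration for a given upload volume."""
    return upload_gb * rate

print(f"${ta_overhead(1000):.2f}/month")  # ~1 TB of uploads

# The actual switch-off (bucket name is a placeholder). Suspending takes
# effect immediately; the standard endpoint keeps working throughout.
# import boto3
# boto3.client("s3").put_bucket_accelerate_configuration(
#     Bucket="my-upload-bucket",
#     AccelerateConfiguration={"Status": "Suspended"},
# )
```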
The API Gateway “Wrong Type” Tax
About six months ago, I was reviewing the AWS bill for a company that had launched their MVP two years earlier. They’d built their API using API Gateway REST APIs because that’s what all the tutorials showed. It worked great, scaled beautifully, and nobody had thought about it since launch. Their API Gateway bill was $875/month for about 250 million requests.
I asked a simple question: “Do you actually use any REST API-specific features?” The engineering lead looked confused. “What do you mean? It’s just an API.” We went through the list together. API keys? Nope, they used JWT authentication. Request validation? Nope, they validated in their Lambda functions. Usage plans and throttling? Nope, they handled that at the application layer. SDK generation? Nobody even knew that was a feature.
They were paying $3.50 per million requests for REST APIs when HTTP APIs would’ve cost them $1.00 per million requests. That’s a 71% price difference for functionality they weren’t using. At 250 million requests per month, they were spending $875 on REST APIs when HTTP APIs would’ve cost $250. They were burning $625 per month, or $7,500 per year, on features they didn’t need.
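The gap scales linearly with traffic, which makes it easy to model for your own request volume. A quick sketch using the first-tier list prices:

```python
# API Gateway request pricing, first tier (per million requests).
REST_PER_MILLION = 3.50
HTTP_PER_MILLION = 1.00

def monthly_cost(millions_of_requests: float, rate: float) -> float:
    """Monthly request charges at a given per-million rate."""
    return millions_of_requests * rate

requests_m = 250  # 250 million requests/month
rest = monthly_cost(requests_m, REST_PER_MILLION)
http = monthly_cost(requests_m, HTTP_PER_MILLION)
print(f"REST: ${rest:.0f}  HTTP: ${http:.0f}  waste: ${rest - http:.0f}/month")
# REST: $875  HTTP: $250  waste: $625/month
```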
The migration wasn’t trivial, but it wasn’t as bad as they feared. HTTP APIs support JWT authorizers, CORS, custom domains, and Lambda integrations: basically everything they were actually using. The main differences were in the request/response transformation syntax and some CloudFormation property names. We created a new HTTP API, updated their infrastructure-as-code, ran parallel testing for a week, then cut over during a maintenance window. Total engineering time: about 20 hours spread across two developers.
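To give a feel for the migration surface, here are the core boto3 (apigatewayv2) parameters for the replacement HTTP API and its JWT authorizer, shown as plain dicts so nothing runs against a live account. The API name, issuer URL, and audience are placeholders for whatever your identity provider issues.

```python
# Parameters for apigatewayv2 create_api and create_authorizer, as dicts.
# Names, issuer, and audience below are hypothetical placeholders.
create_api_params = {
    "Name": "orders-http-api",
    "ProtocolType": "HTTP",
}

create_authorizer_params = {
    "Name": "jwt-authorizer",
    "AuthorizerType": "JWT",
    # Where the API looks for the token on each request:
    "IdentitySource": ["$request.header.Authorization"],
    "JwtConfiguration": {
        "Issuer": "https://auth.example.com/",  # your IdP's issuer URL
        "Audience": ["orders-api"],
    },
}

# Applied roughly like this:
# apigw = boto3.client("apigatewayv2")
# api = apigw.create_api(**create_api_params)
# apigw.create_authorizer(ApiId=api["ApiId"], **create_authorizer_params)
print(create_api_params["ProtocolType"])
```

JWT validation happens in the gateway itself here, which is exactly why no REST-specific feature was needed: the authorizer, routes, and Lambda integrations all have HTTP API equivalents.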
The savings started immediately. Their API Gateway bill dropped from $875 to $250. But we also discovered something interesting during the migration. They had a handful of endpoints that were serving static content: things like terms of service, privacy policy, and some configuration JSON files. These were going through API Gateway and Lambda, costing them request fees and Lambda invocation charges, when they could’ve been served from S3 and CloudFront for pennies.
We moved those static endpoints to S3, updated the frontend to point to CloudFront URLs instead of API Gateway endpoints, and saved another $30/month. It was a small optimization, but it added up. The total savings from their API Gateway refactor: $655/month, or $7,860 annually.
Here’s the decision tree I now share with every team I work with. If you’re building a simple API with JWT authentication and you don’t need API keys, request validation, or usage plans, start with HTTP APIs. They’re cheaper, faster (lower latency), and simpler to configure. If you later discover you need REST API features, you can migrate: but in my experience, most teams never do. And if you’re serving static content or doing simple redirects, don’t use API Gateway at all. That’s what S3 and CloudFront are for.
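That decision tree is simple enough to write down as code. A sketch, with the feature checks phrased as keyword flags:

```python
# The decision tree above, as a function: given what you actually need,
# which service should sit in front of your backend?
def choose_api_layer(*, static_content: bool = False,
                     needs_api_keys: bool = False,
                     needs_request_validation: bool = False,
                     needs_usage_plans: bool = False) -> str:
    if static_content:
        # Static files and simple redirects don't need API Gateway at all.
        return "S3 + CloudFront"
    if needs_api_keys or needs_request_validation or needs_usage_plans:
        # These are the REST-only features that justify the 3.5x price.
        return "API Gateway REST API"
    # Default: cheaper, lower latency, simpler to configure.
    return "API Gateway HTTP API"

print(choose_api_layer())                        # -> API Gateway HTTP API
print(choose_api_layer(needs_usage_plans=True))  # -> API Gateway REST API
print(choose_api_layer(static_content=True))     # -> S3 + CloudFront
```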
The CORS Configuration Nightmare
This one is sneaky because it doesn’t show up as a line item on your bill. It just quietly doubles your API Gateway request count, and most people never notice.
I was helping a company debug some performance issues when I noticed something odd in their CloudWatch metrics. Their API Gateway request count was almost exactly double what their application logs showed. Every single API call from their frontend was generating two requests: an OPTIONS preflight request and the actual GET or POST request.
CORS preflight requests happen when browsers need to verify that a cross-origin request is allowed. The browser sends an OPTIONS request first, gets back the CORS headers, and then sends the actual request. This is normal and necessary for certain types of requests. But here’s the thing: browsers can cache preflight responses if you tell them to. The Access-Control-Max-Age header tells the browser how long to cache the preflight response. Set it to 86400 seconds (24 hours), and the browser only needs to send one preflight per day per endpoint. Set it to 0 or don’t set it at all, and the browser sends a preflight for every single request.
This company wasn’t setting Access-Control-Max-Age at all. Every API call from their frontend triggered a preflight. At 50 million actual requests per month, they were being billed for 100 million requests. On HTTP APIs at $1.00 per million, that’s an extra $50/month. On REST APIs at $3.50 per million, it would’ve been an extra $175/month. They were literally paying double for their API because of a missing header.
The fix was straightforward. We updated their API Gateway responses to include proper CORS headers with a 24-hour max age. For their HTTP API, we used the built-in CORS configuration, which has a MaxAge setting. For a couple of legacy REST API endpoints, we updated the Lambda integration responses. We also reviewed their frontend code to make sure they weren’t doing anything that forced preflights unnecessarily: things like custom headers that weren’t in the CORS allow list or using PUT/DELETE when POST would work fine.
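Both halves of the fix can be sketched briefly. The origin, methods, and headers below are placeholders; the HTTP API piece is shown as the boto3 parameters (commented out so nothing executes against a live account), and the REST piece as a minimal Lambda proxy handler that answers preflights itself.

```python
# HTTP API fix: set CORS at the API level with a 24-hour MaxAge so
# browsers cache preflight responses instead of re-sending them.
cors_configuration = {
    "AllowOrigins": ["https://app.example.com"],   # placeholder origin
    "AllowMethods": ["GET", "POST", "OPTIONS"],
    "AllowHeaders": ["authorization", "content-type"],
    "MaxAge": 86400,  # one preflight per endpoint per day
}
# boto3.client("apigatewayv2").update_api(
#     ApiId="<api-id>", CorsConfiguration=cors_configuration)

# REST API fix: with Lambda proxy integrations, the function itself
# must return the CORS headers, including Access-Control-Max-Age.
def handler(event, context):
    cors = {
        "Access-Control-Allow-Origin": "https://app.example.com",
        "Access-Control-Allow-Methods": "GET,POST,OPTIONS",
        "Access-Control-Allow-Headers": "authorization,content-type",
        "Access-Control-Max-Age": "86400",
    }
    if event.get("httpMethod") == "OPTIONS":
        # Preflight: no body needed, just the cacheable headers.
        return {"statusCode": 204, "headers": cors, "body": ""}
    # ...normal request handling would go here...
    return {"statusCode": 200, "headers": cors, "body": "{}"}
```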
The impact was immediate. Their API Gateway request count dropped by 48%: not quite 50%, because some calls were simple requests that never triggered preflights in the first place. Their API Gateway bill dropped from $100/month to $52/month. More importantly, their API response times improved by about 30ms on average because they’d eliminated an entire round trip for most requests.
But we found another CORS-related issue that was even more expensive. They were using CloudFront in front of their API Gateway, and they’d configured Lambda@Edge to add CORS headers to every response. Lambda@Edge costs $0.60 per million invocations plus duration charges. They were paying for Lambda@Edge to add headers that API Gateway could’ve added for free, and they were doing it on every single response.
We moved the CORS header logic to CloudFront response header policies, which are free. No Lambda@Edge invocations, no duration charges, same functionality. This saved them another $35/month. The total savings from fixing their CORS configuration: $83/month, or about $1,000 annually. Not huge in absolute terms, but remember: this was just one misconfiguration. Most companies have several of these issues compounding on each other.
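A sketch of the response headers policy that replaced the Lambda@Edge function, expressed as the config dict CloudFront's API expects. Same CORS headers, zero per-invocation cost. The origin is a placeholder, and the create call is commented out.

```python
# CloudFront response headers policy replacing Lambda@Edge CORS logic.
# Same headers on every response, no invocation or duration charges.
policy_config = {
    "Name": "cors-with-preflight-caching",
    "CorsConfig": {
        "AccessControlAllowCredentials": False,
        "AccessControlAllowOrigins": {
            "Quantity": 1, "Items": ["https://app.example.com"]},  # placeholder
        "AccessControlAllowMethods": {
            "Quantity": 3, "Items": ["GET", "POST", "OPTIONS"]},
        "AccessControlAllowHeaders": {
            "Quantity": 2, "Items": ["authorization", "content-type"]},
        "AccessControlMaxAgeSec": 86400,  # cache preflights for 24 hours
        "OriginOverride": True,  # CloudFront's headers win over the origin's
    },
}
# boto3.client("cloudfront").create_response_headers_policy(
#     ResponseHeadersPolicyConfig=policy_config)
print(policy_config["CorsConfig"]["AccessControlMaxAgeSec"])
```

Once created, the policy is attached to the distribution's cache behavior, and the Lambda@Edge association can simply be removed.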
The Architecture Decisions You Can Actually Change
Here’s what I’ve learned from helping companies unwind these architectural mistakes: the sunk cost fallacy is real, and it’s expensive. Teams are reluctant to refactor working code, even when the refactor would save thousands of dollars per year. They tell themselves it’s too risky, too time-consuming, or not worth the engineering effort.
But the math usually tells a different story. If migrating from REST APIs to HTTP APIs takes 20 hours of engineering time (call it $2,000 at a loaded $100 an hour) and saves $7,500 per year, the work pays for itself in about three months and keeps paying every year after. If setting up CloudFront in front of S3 takes 4 hours and saves $2,000 per year, the payback period is measured in weeks. These aren’t marginal optimizations: they’re fundamental architecture improvements that pay for themselves many times over.
The key is to approach these changes systematically. Don’t try to fix everything at once. Start with the highest-impact, lowest-risk changes. Enabling CORS caching is a one-line configuration change with zero risk. Setting up CloudFront in front of S3 is low risk if you test thoroughly. Migrating from REST to HTTP APIs requires more planning, but you can run them in parallel during the transition.
I also recommend tracking your AWS costs by service and setting up CloudWatch alarms for anomalies. If your S3 data transfer costs suddenly spike, you want to know immediately, not when the bill arrives at the end of the month. If your API Gateway request count doubles overnight, that’s a signal that something changed: maybe a frontend bug is causing retry loops, or maybe someone deployed code that’s triggering unnecessary preflights.
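A simple starting point is a CloudWatch alarm on the account's estimated charges. Here it is as the parameter dict for put_metric_alarm: the threshold and SNS topic ARN are placeholders, billing metrics only exist in us-east-1, and billing alerts must be enabled on the account first. (For spend patterns rather than a flat threshold, AWS Cost Anomaly Detection is the more sophisticated option.)

```python
# Sketch of a CloudWatch billing alarm. Threshold and SNS topic ARN are
# placeholders; billing metrics live in us-east-1 only and require
# billing alerts to be enabled in the account settings.
billing_alarm = {
    "AlarmName": "monthly-estimated-charges",
    "Namespace": "AWS/Billing",
    "MetricName": "EstimatedCharges",
    "Dimensions": [{"Name": "Currency", "Value": "USD"}],
    "Statistic": "Maximum",
    "Period": 21600,            # billing metrics update roughly every 6 hours
    "EvaluationPeriods": 1,
    "Threshold": 500.0,         # set just above your normal monthly spend
    "ComparisonOperator": "GreaterThanThreshold",
    "AlarmActions": ["arn:aws:sns:us-east-1:123456789012:billing-alerts"],
}
# boto3.client("cloudwatch", region_name="us-east-1").put_metric_alarm(
#     **billing_alarm)
print(billing_alarm["MetricName"])
```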
The companies that manage their AWS costs effectively treat infrastructure optimization as an ongoing practice, not a one-time project. They review their Cost Explorer monthly. They monitor their cache hit ratios and request patterns. They question architectural decisions that made sense two years ago but might not make sense today. They understand that the cloud is pay-as-you-go, which means every inefficiency costs you money every single day until you fix it.
If you’re looking at your S3 and API Gateway costs right now and feeling that sinking feeling, take a breath. These problems are fixable. Pull up your Cost Explorer and see where the money is going. Check if you’re serving content directly from S3 when CloudFront would be cheaper. Look at your API Gateway types and see if you’re paying for features you don’t use. Review your CORS configuration and make sure you’re caching preflight responses. Each of these fixes might only save you a few hundred dollars per month, but they add up quickly.
In Part 3, we’re going to tackle the operational costs that nobody thinks about until they’re out of control: logging, monitoring, and the hidden expenses that compound over time. We’ll also put together a complete cost-optimized reference architecture that ties everything together. Until then, go audit those S3 and API Gateway configurations. Your infrastructure is probably costing you more than it should, and now you know how to fix it.
Found one of these anti-patterns in your own setup? I’d love to hear how much you saved after fixing it. Drop a comment below, and if you’re dealing with a particularly complex migration scenario, reach out. I’ve probably helped someone through something similar.



