Monitoring latency: Vercel Serverless Function vs Vercel Edge Function
Mar 14, 2024 • 5 min read
In our previous article, we compared the latency of various cloud providers but did not include Vercel. This article compares the latency of Vercel Serverless Functions with Vercel Edge Functions.
We will test a basic Next.js application using the App Router. The routes are set up as follows:
We have four routes: three use the Node.js runtime and one uses the Edge runtime.
- /api/ping uses the Node.js runtime
- /api/ping/warm uses the Node.js runtime
- /api/ping/cold uses the Node.js runtime
- /api/ping/edge uses the Edge runtime
Each route has a different maxDuration; it's a trick to avoid Vercel bundling the routes into the same physical function.
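The actual code lives in the repository linked below; as a rough sketch, each handler might look like this (file paths and the maxDuration values are illustrative, not the exact ones used):

```ts
// app/api/ping/route.ts - hypothetical reconstruction, Node.js runtime.
export const runtime = "nodejs";
// Each route exports a different maxDuration (the value here is illustrative)
// so that Vercel does not bundle the routes into the same physical function.
export const maxDuration = 7;

export function GET() {
  return Response.json({ ping: "pong" });
}

// app/api/ping/edge/route.ts - same handler, but with:
// export const runtime = "edge";
```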
Here is the repository of the application.
Vercel Serverless Function - Node.js runtime
They use the Node.js 18 runtime, giving us access to all the Node.js APIs. Our functions are deployed in a single location: iad1 (Washington, D.C., USA).
Upgrading to Node.js 20 could enhance cold start performance, but it's still in beta.
We analyzed the headers of each request and observed that every request is first processed in a data center near the checker's location before being routed to the region where the serverless function runs:
| Checker | Edge data center | Serverless region |
| ------- | ---------------- | ----------------- |
| ams     | fra1             | iad1              |
| gru     | gru1             | iad1              |
| hkg     | hkg1             | iad1              |
| iad     | iad1             | iad1              |
| jnb     | cpt1             | iad1              |
| syd     | syd1             | iad1              |
We never encountered a request routed to a different data center, and we never hit the Vercel cache.
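As a rough illustration of how such a check could be done (the deployment URL is a placeholder and the exact header format is an assumption based on the values we observed):

```ts
// Hypothetical snippet used to inspect routing via the x-vercel-id header.
async function inspectRouting(url: string) {
  const res = await fetch(url);
  // e.g. "fra1::iad1::<request-id>" - received in fra1, executed in iad1.
  console.log(res.headers.get("x-vercel-id"));
}

inspectRouting("https://your-deployment.vercel.app/api/ping/warm");
```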
Warm - /api/ping/warm
| uptime | fails | total pings | p50    | p75    | p90    | p95    | p99    |
| ------ | ----- | ----------- | ------ | ------ | ------ | ------ | ------ |
| 100%   | 0     | 12,090      | 246 ms | 305 ms | 442 ms | 563 ms | 855 ms |
We ping this function every 5 minutes to keep it warm.
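For context, here is a minimal sketch of what a single ping measurement and the percentile math behind these tables could look like (function names are illustrative; this is not the actual OpenStatus checker code):

```ts
// Measure the latency of one request, in milliseconds.
async function ping(url: string): Promise<number> {
  const start = performance.now();
  await fetch(url);
  return performance.now() - start;
}

// Nearest-rank percentile over the collected samples.
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const index = Math.max(0, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[index];
}

// percentile(samples, 50) -> p50, percentile(samples, 99) -> p99
```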
Cold - /api/ping/cold
| uptime | fails | total pings | p50    | p75    | p90      | p95      | p99      |
| ------ | ----- | ----------- | ------ | ------ | -------- | -------- | -------- |
| 100%   | 0     | 2,010       | 859 ms | 933 ms | 1,004 ms | 1,046 ms | 1,156 ms |
We ping this function every 30 minutes to ensure it has been scaled down before each ping.
Cold Roulette - /api/ping
| uptime | fails | total pings | p50    | p75    | p90    | p95    | p99      |
| ------ | ----- | ----------- | ------ | ------ | ------ | ------ | -------- |
| 100%   | 0     | 6,036       | 305 ms | 791 ms | 914 ms | 972 ms | 1,086 ms |
We ping this function every 10 minutes. It's an inflection point where we never know whether the function will be warm or cold.
Vercel Edge Function
Vercel Edge Functions use the Edge Runtime. They are deployed globally and executed in a data center close to the user. They have limitations compared to the Node.js runtime, but they have a faster cold start.
We analyzed the request headers and found that the X-Vercel-Id header indicates the request is processed in a data center near the user:
| Checker | Data center |
| ------- | ----------- |
| ams     | fra1        |
| gru     | gru1        |
| hkg     | hkg1        |
| iad     | iad1        |
| jnb     | cpt1        |
| syd     | syd1        |
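As a hypothetical illustration, the edge route can echo where it executed. We assume here that the VERCEL_REGION environment variable is populated at runtime; check Vercel's documentation for the variables available to your project.

```ts
// app/api/ping/edge/route.ts - hypothetical variant that reports its region.
export const runtime = "edge";

export function GET() {
  return Response.json({
    ping: "pong",
    region: process.env.VERCEL_REGION ?? "unknown", // assumption: set by Vercel
  });
}
```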
Edge - /api/ping/edge
| uptime | fails | total pings | p50    | p75    | p90    | p95    | p99    |
| ------ | ----- | ----------- | ------ | ------ | ------ | ------ | ------ |
| 100%   | 0     | 6,042       | 106 ms | 124 ms | 152 ms | 178 ms | 328 ms |
We ping this function every 10 minutes.
Conclusion
| Runtime               | p50 (ms) | p95 (ms) | p99 (ms) |
| --------------------- | -------- | -------- | -------- |
| Serverless Cold Start | 859      | 1,046    | 1,156    |
| Serverless Warm       | 246      | 563      | 855      |
| Edge                  | 106      | 178      | 328      |
Globally, Edge Functions are roughly 8 times faster than Serverless Functions during a cold start (859 ms vs. 106 ms at p50), but only about 2 times faster when the function is warm (246 ms vs. 106 ms at p50).
Edge functions have similar latency regardless of the user's location. If you value your users and have a worldwide audience, you should consider Edge Functions.
Create an account on OpenStatus to monitor your API and get notified when your latency increases.