This page is written directly in Markdown - one file for the body,
plus one for the header and one for the footer.
For each HTTP request your browser makes to Markdown@Edge, a Fastly server
checks its cache for the relevant Markdown files, and either uses the cached
files as-is or fetches them from my web server at nora.codes.
The WebAssembly code running on that edge server, compiled from a single Rust file,
then combines them, parses the resulting string as Markdown, and uses a
streaming, event-based renderer to convert that Markdown to HTML.
The only dependency of the service, other than the Fastly SDK,
is Raph Levien's [`pulldown-cmark`][pulldown_cmark].
[pulldown_cmark]: https://github.com/raphlinus/pulldown-cmark
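
Roughly, that rendering step looks like the sketch below - a minimal illustration of pulldown-cmark's streaming API, where the `render_page` name and the already-fetched `header`/`body`/`footer` strings are assumptions rather than the service's actual code:

```rust
use pulldown_cmark::{html, Parser};

/// Combine the three Markdown fragments and render them to HTML with
/// pulldown-cmark's streaming, event-based renderer.
fn render_page(header: &str, body: &str, footer: &str) -> String {
    // Concatenate header, body, and footer into one Markdown document.
    let source = format!("{header}\n{body}\n{footer}");

    // The Parser yields a stream of events rather than building a tree.
    let parser = Parser::new(&source);

    // Push the HTML for each event into the output buffer.
    let mut output = String::new();
    html::push_html(&mut output, parser);
    output
}
```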
On a typical Markdown-based website, this rendering happens just once, when the
source Markdown files are converted into the HTML files that are actually
served to your browser. In this case, the rendering has been moved "to
the edge" and runs on a server that is probably physically much closer to
you than my webserver is.
[![A diagram showing the difference in pipeline between normal websites with a CDN and this monstrosity](images/diagram.jpg)](images/diagram.html)
The renderer takes advantage of the fact that Markdown allows raw HTML: anything
served with a "text" MIME type is embedded in the Markdown source, while anything
without a "text" MIME type - images, binary data, and so forth - is passed
through unchanged.
That allows the JPG image above to embed properly, while the linked SVG page is
rendered as a component of a Markdown document.
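
A rough sketch of that MIME-type dispatch - hypothetical, with the `Resource` type and `classify` function invented for illustration rather than taken from the service's code:

```rust
/// How a fetched resource is treated, based on its MIME type.
enum Resource {
    /// Text content is spliced into the Markdown source and rendered to HTML.
    Markdown(String),
    /// Everything else is returned to the client exactly as the origin served it.
    Passthrough(Vec<u8>),
}

fn classify(content_type: &str, body: Vec<u8>) -> Resource {
    if content_type.starts_with("text/") {
        // Raw HTML (for example, an inline SVG page) is legal inside Markdown,
        // so any text body can simply be embedded in the source.
        Resource::Markdown(String::from_utf8_lossy(&body).into_owned())
    } else {
        Resource::Passthrough(body)
    }
}
```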
### Why?
I work on the Compute@Edge platform and wanted to get some hands-on experience with it.
This is not a good use of the platform for various reasons; among other things,
it buffers all page content in memory for every request, which is ridiculous.
A static site generator like [Hugo][hugo] or [Zola][zola] is an objectively better choice
for bulk rendering, while the C@E layer is better for filtering, editing, and dynamic content.
[hugo]: https://gohugo.io/
[zola]: https://www.getzola.org/
That said, this does demonstrate some interesting properties of the C@E platform.
For instance, the source files are hosted in [a subdirectory][source] of my webserver;
in theory, you could directory-traverse your way into my blog source, but in fact,
you can't.
I also have a [Clacks Overhead][clacks] header set on my webserver, and the C@E
platform makes it trivial to pass existing headers through, even on entirely
synthetic responses like these, so that header is preserved.
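
A minimal sketch of that passthrough, using the Fastly Compute Rust SDK's `Request` and `Response` types; the backend name, URL, and `preserve_header` helper are illustrative assumptions, and the method names are from memory rather than from the service's actual code:

```rust
use fastly::{Error, Request, Response};

/// Copy a header from the origin's response onto a synthetic response,
/// if the origin set it at all.
fn preserve_header(origin: &Response, synthetic: &mut Response, name: &str) {
    if let Some(value) = origin.get_header(name).cloned() {
        synthetic.set_header(name, value);
    }
}

fn handle(_req: Request) -> Result<Response, Error> {
    // Fetch the Markdown source from the origin backend
    // ("origin" is an illustrative backend name).
    let origin_resp = Request::get("https://nora.codes/edgeblog/index.md").send("origin")?;

    // Build an entirely synthetic HTML response...
    let mut resp = Response::from_body("<p>rendered HTML goes here</p>");

    // ...but carry the origin's Clacks Overhead header across to it.
    preserve_header(&origin_resp, &mut resp, "X-Clacks-Overhead");
    Ok(resp)
}
```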
[source]: https://nora.codes/edgeblog/
[clacks]: https://xclacksoverhead.org/home/about