
Frosted glass from games to the web

I really like the aesthetics, even if it's physically wrong at the edges. Thanks for sharing the details.

As an embedded developer, I feel this is kind of wasteful. Every client computes an "expensive" blur filter, over and over again? Just to blend with a blurred version of the background image?

I know - this is using the GPU, this is optimized. In the end, it shouldn't cost much. (Or does it, really?)

<rant> I feel the general trend in current web development is too much bloat. Simple sites take 5 seconds to load? Heavy lifting on the client? </rant>... but that's not the author's fault

a day ago · schobi

I guess everybody has their own preconceptions of what's wasteful.

I grew up in the era of 14.4k modems, so I'm used to thinking that network bandwidth is many, many orders of magnitude more scarce and valuable than CPU time.

To me, it's wasteful to download an entire image over the Internet if you can easily compute it on the client.

Think about all the systems you're activating along the way to download that image: routers, servers, even a disk somewhere far away (if it's not cached on the server)... All that just to avoid one pass of processing on data you already had in RAM on the client.

a day ago · pavlov

"Mips – processing cycles, computer power – had always been cheaper than bandwidth. The computers got cheaper by the week and the phone bills stayed high by the month." - The Star Fraction, 1995

a day ago · gary_0

Each visitor brings their own CPU to do this work, whereas the server's bandwidth is finite.

a day ago · gfody

I'm confused though.

If the goal is to optimize for server bandwidth, wouldn't you still want to send the already-blurred photo? Surely that will be a smaller image than the full-res un-blurred photo (while also reducing client-side CPU/OS requirements).

21 hours ago · cj

We don't know the aspect ratio of the client window beforehand, and on the web there are a lot of possibilities! So if any pre-blurred image is meant to peek out around the edges, those edge widths are dynamic. Otherwise, a low-res blurred image plus high-res non-blurred edges might be less bandwidth, if the overhead is low enough.

19 hours ago · pitched

I have the same perspective regarding bandwidth, but I also consider any client to be running on a computer at least ten years old and at least three OS revisions behind.

I like to consider myself a guest on a client's CPU, GPU, and RAM. I should not eat all their food, leave an unflushed turd in their toilet, or hog the remote control. Be a thoughtful guest who encourages feelings of inviting me back in the future.

Load fast, even when cell coverage is marginal. Low memory so a system doesn't grind to a halt from swapping. Animate judiciously because it's polite. Good algorithms, because everyone notices when their cursor becomes jerky.

21 hours ago · ttfkam

Okay, but how do you compute an image? How would your browser -- or any other client software -- know what the hero image of a blog you have never visited before looks like, for example?

I feel like I am missing something important in your comment.

a day ago · pdimitar

The article describes a computational method of rendering a frosted glass effect. You could achieve the same thing by rendering the effect once (then uploading it to a server) and having the client download the rendered image. Or you can compute the frosted glass effect on the client. Which is better? That's the argument.

21 hours ago · highwind

It's like people forgot what graceful degradation and progressive enhancement is.

17 hours ago · thoughtpalette

Ah, sorry, I didn't make it that far in the article.

IMO it really depends on the numbers. I'd be OK if my client downloads 50 KB of extra data for the already-rendered image, but I'll also agree that from 100 KB and above it is kind of wasteful and should be computed.

With the modern computing devices we all have -- including 3rd world countries, where a cheap Android phone can still do a lot -- I'd say we should default to computation.

21 hours ago · pdimitar

Most of those websites that are technically "wasteful" in some ways are way more "wasteful" when you realize what we use them for. Mostly it's pure entertainment.

So either entertainment is wasteful, or if it's not, spending more compute to make the entertainment better is OK.

a day ago · vasco

I would say most websites are wasteful wrt the customer, which is usually advertisers. There are websites where the user is the customer, but they’re rare these days.

a day ago · klabb3

IMO it is time to seriously realise that most of this "ooh looks cool, surely I/we need that" tech has no place in this world. Whether or not the act itself is wasteful (although it generally is in tech...), the thought process itself indicates a bigger problem with society. Why do we need this thing? Why do we consider being without the thing to be bad? Like seriously, at the scale of issues in society today, who cares if your UI panel is blurred or not?

7 hours ago · mcdeltat

Isn't sending both the blurred and non-blurred picture over the network the way we've done it for two decades in web dev? With (many!) high-resolution pictures this is definitely less performant than a local computation, given that real networks have finite bandwidth, in particular for mobile clients in spots with bad wireless coverage. It is astonishing what can be done with CSS/WebGL alone these days. We needed a lot of hacks and workarounds for that in the past.

a day ago · ktpsns

A blurred image shouldn't add much on top of the high-resolution image, considering its information content is much smaller.

a day ago · djmips

I don't have much data myself, but when I was doing some scraping a while ago I had thousands of examples where f.ex. the full-res image was something like 1.7 MB and the blurred image was in the range of 70 KB - 200 KB, so more or less 7% - 11% of the original. And I might be lying here (it's been a while), but I believe at least 80% of the blurred images were 80 KB or less.

Technically yes, you could make some savings, but since the images were transferred over an HTTP/1.1 Keep-Alive connection, I don't feel it was such a waste.

Would love to get more data if you have it; it's just that from the limited work I did in the area, it did not seem worth it to download only the high-res image and do the blur yourself... especially in scenarios where you need the blurred image + dimensions first, to prevent the constant annoying visual reflow as images are downloaded -- something _many_ websites suffer from even today.

a day ago · pdimitar

I would argue that while it _feels_ wasteful to us humans, as we perceive it as a "big recomputation of the rendered graphics", technically it's not.

The redrawing of anything that changes in your UI requires GPU computation anyway, and some simple blur is quite efficient to add. Likely less expensive than any kind of animation of DOM objects that isn't optimized as a GPU layer.

Additionally, seeing how nowadays the most simple sites tend to load 1+ MB of JS and trackers galore, all eating at your CPU resources, I'd put that bit of blur for aesthetics very far down on the "wasteful" list.

a day ago · krsdcbl

I generally agree - the caveat is "for some values of 'some simple blur'" - and the one described in the article is not one, in my book.

For reference, for every pixel in the input we need to average roughly 3x^2 pixels, where 3 is actually pi and x is the radius.

This blows up quite quickly. Not enough that my $5K MacBook really breaks a sweat with this example. But GPU performance is one of the most insidious things a dev can accidentally forget to account for: it's not so great on other people's devices.

18 hours ago · refulgentis

As per the central limit theorem, one can approximate a Gaussian by repeated convolution with almost any kernel, a box blur being the most obvious candidate here. And a box blur can be computed quickly with a summed-area table.
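A sketch of that idea in JavaScript (single-channel, function names made up for illustration): with a summed-area table, each output pixel needs only four table lookups, so one box pass is O(pixels) regardless of radius, and repeating it approximates a Gaussian.

```javascript
// One box-blur pass over a grayscale image using a summed-area table.
// Each output pixel costs four table lookups, so the pass is O(pixels)
// no matter how large the blur radius is.
function boxBlurSAT(pixels, w, h, r) {
  const sw = w + 1;
  // sat[y][x] = sum of all pixels above and left of (y, x); row/col 0 stay zero.
  const sat = new Float64Array(sw * (h + 1));
  for (let y = 0; y < h; y++) {
    let rowSum = 0;
    for (let x = 0; x < w; x++) {
      rowSum += pixels[y * w + x];
      sat[(y + 1) * sw + (x + 1)] = sat[y * sw + (x + 1)] + rowSum;
    }
  }
  const clamp = (v, lo, hi) => Math.min(Math.max(v, lo), hi);
  const out = new Float64Array(w * h);
  for (let y = 0; y < h; y++) {
    const y0 = clamp(y - r, 0, h), y1 = clamp(y + r + 1, 0, h);
    for (let x = 0; x < w; x++) {
      const x0 = clamp(x - r, 0, w), x1 = clamp(x + r + 1, 0, w);
      // Window sum via four summed-area-table lookups;
      // edge pixels simply use a shrunken window.
      const sum = sat[y1 * sw + x1] - sat[y0 * sw + x1]
                - sat[y1 * sw + x0] + sat[y0 * sw + x0];
      out[y * w + x] = sum / ((y1 - y0) * (x1 - x0));
    }
  }
  return out;
}

// Central limit theorem in action: a few box passes approximate a Gaussian.
function approxGaussianBlur(pixels, w, h, r, passes = 3) {
  let out = Float64Array.from(pixels);
  for (let i = 0; i < passes; i++) out = boxBlurSAT(out, w, h, r);
  return out;
}
```

Three passes is the usual rule of thumb for a visually convincing Gaussian; real-time GPU blurs more often use separable two-pass filters, but the cost argument is the same.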

a day ago · RicoElectrico

> a repeated convolution

I really wonder what the frame of reference for "quickly" is there. To me, convolution is one of the last-resort techniques in signal processing, given how expensive it is (O(size of input data * size of convolution kernel)). It's of course still much faster than a true Gaussian blur, which is non-trivial to manage at a barely decent 120 fps even on huge Nvidia GPUs, but still.

a day ago · jcelerier

How are we supposed to think about SIMD in Big-O? Because this is still linear time if the kernel width is less than the max SIMD width (which is 16 I think on x64?)

18 hours ago · pitched

I recently had a shower thought that the bigger you go, the more energy you need to do computation. As in, you could make a computer out of moving planets. On the other hand, you could go small and make a computer out of a tiny particle. Both scales achieve the same result, but at very different costs.

19 hours ago · smusamashah

There is a sci-fi series that I am absolutely blanking on that features that concept - I remember a few characters each having access to a somewhat godlike ability to manipulate physics, and using it to restructure the universe to create computers to augment their own capabilities - definitely some planetary stuff and some quantum / atomic level stuff.. hmmmm maybe gpt can help

18 hours ago · mock-possum

would it happen to be "Zones of Thought" by Vernor Vinge?

16 hours ago · heatmiser

Ooh no it is not, but I am coincidentally working my way through the third book in that series!

8 hours ago · mock-possum

I guess eventually it's a trade-off between doing heavy lifting yourself and paying a little more compute and bandwidth, or offloading it to clients and wasting more energy but at lower cost to the developer. I think there are environmental arguments in both directions (more energy spent computing stuff on the client vs more energy sending pre-computed assets over the networks). I'm not sure which is better ultimately - I suppose it varies case-by-case.

a day ago · promiseofbeans

First, I really like the effect the author has achieved. It's very pretty.

Now for a bit of whimsy. It's been said that a picture is worth a thousand words. However, a thousand words uses far less bandwidth. What if we go full-tilt down the energy saving path, replace some images with prose to describe them? What would articles and blog posts look like then?

I know it's not practical, and sending actual images saves a lot of time and effort over trying to describe them, but I like the idea of imagining what that kind of web might look like.

a day ago · dcuthbertson

With a standardized diffusion model on the receiving end, and a starting-point image (maybe 16x16 pixels) with a fixed seed, we could send images with tiny amounts of data, with the client deciding the resolution (i.e., how much compute to dedicate) as well as whatever local flavor they wanted (display all images in the style of Monet…). Bandwidth could be minimized and the user experience deeply customized.

We’d just be sending prompts lol. Styling, CSS, etc. could all receive similar treatment, using a standardized code-generating model and the prompt/seed that generates the desired code.

Just need to figure out how to feed code into a model and have it spit out the prompt and seed that would generate that code in its forward generation counterpart.

a day ago · K0balt

To consistently generate the same image, we’d all have to agree on a standard model, which I can’t see happening any time soon. They feel more like fonts than code libraries.

18 hours ago · pitched

I mean, yeah, but here we’re talking about a knowledge based compression standard, so I would assume that a specific model would be chosen.

The interesting thing here is that the model wouldn’t have to be the one that produces the end result, just -an- end result deterministically produced from the specified seed.

That end result could then act as the input to the user custom model which would add the user specific adjustments, but presumably the input image would be a strong enough influence to guide the end product to be equivalent in meaning if not in style.

Effectively, this could be lossless compression, but only for data that could be produced by a model given a specific prompt and seed, or lossy compression for other data.

It’s a pretty weird idea, but it might make sense if thermodynamic computing or similar tech fulfills its potential to run huge models cheaply and quickly on several orders of magnitude less power (and physical size) than is currently required.

But that will require NAND-scale, room-temperature thermodynamic wells or die-scale micro-cryogenic coolers. Both are a bit of a stretch, but they are only engineering problems rather than out of bounds with known physics.

The real question is whether or not thermodynamic wells will be able to scale, and especially whether we can get them working at room temperature.

14 hours ago · K0balt

I’m pretty sure the radio on a mobile device consumes more energy than the GPU doing a 2D operation on a single image.

If you want to save energy, send less data.

a day ago · pavlov

Did my site take > 5 seconds to load?

I put a lot of effort into minimizing content. The images are orders of magnitude larger than the page content but should load async. Other assets barely break 20 kB in total, aside from the font (100 kB), which should also load async.

14 hours ago · tyleo

Tbh I think people radically underestimate how fast, and how efficient, GPUs are. The Apple Watch has physically based rendering in its UI. It would be interesting to compare the actual cost of that versus using a microcontroller to update a framebuffer pushed to a display via SPI.

I did some WebGL nonsense like https://luduxia.com/showdown/ and https://luduxia.com/whichwayround/ . This is an experimental custom renderer with DoF, subsurface scattering and lots of other oddities. You are not killed by calculation but by memory access, but how to reduce that in blur operations is well understood.

What there is not is semi-transparent objects occluding each other, because that becomes a sorting nightmare and you would end up having to resolve a whole lot of dependencies dynamically. (Unless you restrict blending modes.) Implementing that in the context of widgets that move on a 2D plane with z-index sorting is enormously easier than in a 3D scene, though.

a day ago · fidotron

I extracted a similar "light rays" texture from Windows 7 when I was trying to replicate its aero glass effect on Android to see what can be done with the new shader APIs. I ended up with something very close to this article, except I didn't do the inner shadows but I did increase the saturation of the blurred background a bit.

https://mastodon.social/@grishka/110524476584503201

a day ago · grishka

Very cool. The light rays texture can actually do a lot of heavy lifting in the effect. TBH the one I made for the tutorial isn’t even that great. I like the harder lines on that Windows 7 one.

I thought about playing with saturation and I saw some other examples do that. I decided against it. For my article anyways it wouldn’t have mattered as much given that the moon image I used doesn’t have much color. I’d encourage folks making their own glass to play with that though.

a day ago · tyleo

This is great work, the interactive demo is a good way to show it off too. It’s fun to drag the window over the moon.

That said I dislike the use of frosted glass in user interface design, and feel it was a step backwards for Mac OS when it was added. Designers (hypocritically including myself - I use it too sometimes) are so seduced by it. But in practice it’s never ideal. Especially if the content behind the panel is dynamic or user generated.

It’s the brushed metal of the 2010s, I’m surprised that it leaked out of Windows Vista into everything else!

a day ago · mrkpdl

Why isn't it "ideal"? Visual design is mostly about personal preferences about what looks good, rather than serving any practical purpose. Yes, design also impacts functionality, but it mostly exists to make the user experience more pleasant.

If you ask me, skeuomorphism makes interfaces more pleasing to use than the minimalistic trend of the past decade+, where everything must be flat. It adds a visual flourish that replicates surfaces and materials we're used to from the real world, making interfaces more familiar and approachable. From a practical standpoint, flat design makes it hard to distinguish elements and their state, whereas when we had 3D buttons, brushed metal and frosted glass, this was very clear.

I think the pendulum is swinging back now, as more designers are shunning the flat look, so hopefully we'll see a resurgence of more human-friendly UIs.

a day ago · imiric

I'm not a UI designer, so honest question: is the problem the use of aesthetics, like frosted glass or brushed metal, in UI design, or is it that in pursuit of presenting such interfaces some designers lost the visual cues needed to indicate where controls are and what they do?

a day ago · dcuthbertson

My terminal windows have had a slight transparency for years, but now I've been rethinking it

a day ago · pmarreck

I loved it, but once I had to tail a live log of a Docker container with no option to install even 2-3 basic UNIX tools (like `tee`, to append to a text log), and from that day on I removed the 20% transparent background image.

Nowadays I have just one terminal program that is just slightly transparent and I have a nice desktop background that is fairly uniform and almost never gets in the way -- but I never use that one for my Neovim or for ssh-ing into remote machines. Only for myself fiddling around with stuff.

Transparency did look cool, but once it prevents you from doing your job well even once, it's out the door.

a day ago · pdimitar

Ironically previous versions of macOS also used brushed metal heavily

a day ago · bowsamic

That's the reference. Brushed metal was a controversial design feature of early Mac OS X, unevenly applied across the built-in applications. (It was kind of supposed to be for media browsers, but then the Finder and Calendar and many others were also brushed metal for some reason.)

It debuted already in 1999 in QuickTime Player, years before Mac OS X:

http://hallofshame.gp.co.at/qtime.htm

The rumor was that Steve Jobs liked this style a lot, so it spread to the operating system despite the widespread external criticism (like the link above).

a day ago · pavlov

I did a CSS experiment back in 2008 (before CSS allowed for backdrop blur) that achieved the same-ish effect: https://webdev.andersriggelsen.dk/aero/

It works by using a fixed-position pre-blurred (with glass effects) background image: https://webdev.andersriggelsen.dk/aero/bgl.jpg

This is a lot more performant than a live Gaussian blur, but it of course has all the drawbacks of not allowing for a dynamic background image.
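The core of that trick can be sketched in a few lines (class and file names here are made up, not the experiment's actual ones): both the page and the panel use `background-attachment: fixed`, so the sharp and pre-blurred layers stay registered to the viewport as the panel moves.

```css
/* The page shows the sharp image, the panel a pre-blurred copy.
   `fixed` anchors both backgrounds to the viewport, so they line up. */
body {
  background: url("background.jpg") no-repeat fixed;
}
.glass-panel {
  background: url("background-blurred.jpg") no-repeat fixed;
  border-radius: 8px;
}
```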

a day ago · riggsdk

I took it one step further even earlier, using a Flash file as a backdrop, which was able to achieve the glass effect while still being in a dynamic, movable frame. Can't recall if it used iframes; I would need to dig up the code, but considering the era it probably did.

a day ago · Lorin

The floating window doesn't update for me past the initial position. Windows 10 19045.5131, Chrome up to date: Version 131.0.6778.86 (Official Build) (64-bit).

a day ago · djmips

That is quite strange Chrome behavior. It seems to only update its background position when the window size changes. Looks like a Chrome bug to me! Probably some optimization effort that went wrong. I'm quite sure that it worked in Chrome back in the day.

This works as expected in Firefox.

a day ago · riggsdk

Cool technical achievement; bad UX. It's harder to read or understand anything inside a translucent panel than one with a consistent, solid background, because of the lower and inconsistent contrast with the background. Additionally, things potentially happening underneath will be distracting and disorienting.

a day ago · cosmotic

"Bad UX" can't be generalized that easily for a simple visual effect, that's highly dependent on the integration.

A good integration would:

- consider how heavily to use the effect so as not to overload the UI

- query for feature support to avoid adding all the additional decorations if blur isn't available

- query for display & video features to avoid rendering the effect on devices that likely don't have a capable GPU

- query for user preferences and serve solid colors with good contrast to users who might struggle with the busier background

- limit the extent of the effects and shadows depending on available space

- tweak the blurring and opacities to ensure good readability in the specific UI, depending on how complex or contrasted the background under blurred areas will be

- ensure blurs and shadows scale adaptively with the typography for consistent visuals

UX is by definition the design of how a user experiences the complete app and interaction; it's not made or broken by individual stylistic choices.
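The two "query" bullets above map onto standard CSS at-rules; a sketch (the class name is made up, and `prefers-reduced-transparency` still has limited browser support, so treat it as progressive enhancement):

```css
.panel {
  /* Solid fallback for browsers without backdrop-filter. */
  background: rgb(30 30 30);
  color: #fff;
}

/* Only layer on the glass when the browser can actually render it. */
@supports (backdrop-filter: blur(10px)) or (-webkit-backdrop-filter: blur(10px)) {
  .panel {
    background: rgb(30 30 30 / 0.5);
    -webkit-backdrop-filter: blur(10px);
    backdrop-filter: blur(10px);
  }
}

/* Respect users who asked for less transparency. */
@media (prefers-reduced-transparency: reduce) {
  .panel {
    background: rgb(30 30 30);
    -webkit-backdrop-filter: none;
    backdrop-filter: none;
  }
}
```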

a day ago · krsdcbl

Eh, if lower contrast is an issue, make the background or glass tint darker and the foreground content lighter, or vice versa. Good contrast is a challenge with any design element, regardless of whether it's a solid color or this glass effect.

Things happening in the background being distracting and disorienting is also very subjective. You can lower the translucency of the glass just as you would lower the opacity of a solid color.

My point is that your criticism is far from being objectively true. There are ways of integrating this design element without running into those issues. The screenshot from the Forza game looks fine, for example, and having seen it in action, the background is not distracting. And what you gain is a more interesting and visually pleasing UI than if you were using a solid background. This may be more important in a video game, but it can also be done tastefully in OS and web design.

a day ago · imiric

Forza is a game, so the background being distracting or not isn't a real problem. On Windows 11, however, there are transparency effects here and there, and it's indeed very annoying when a window changes color because of what is behind it. Thankfully there's a setting to turn that down.

a day ago · gwervc

Yeah, I don’t use this effect across my site because it’s harder to get a good UX with it.

That being said, it has more of a place in games. Especially in a HUD, where you don’t want to occlude the background, since that’s the main area the user is interacting with.

a day ago · tyleo

It has all the same problems in games as it has on a website, it's just more acceptable to sacrifice usability for 'looking cool' in a game.

a day ago · cosmotic

In a HUD you don’t want to occlude the background, since that’s the main area the user is interacting with.

a day ago · cubefox

Disagree; the action behind the glass will make using the HUD more difficult, and the glass will make seeing what's behind it almost impossible. It makes both parts of the experience worse, although, on the surface, to an onlooker, it might look 'cool'.

20 hours ago · cosmotic

Windows Vista solved this by giving black text on top of the frosted glass a white shadow-like outline. This guarantees the text is readable even in the extreme case where the background is black. In CSS this effect can be replicated with the text-shadow property.
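A rough CSS approximation of that Vista treatment (the values are guesses, not Vista's actual parameters): a glow plus several stacked white shadows fakes a soft outline around dark text.

```css
.glass-title {
  color: #000;
  /* A glow and four offset white shadows approximate an outline,
     keeping dark text legible even over a dark backdrop. */
  text-shadow:
    0 0 4px #fff,
    1px 1px 2px #fff,
    -1px 1px 2px #fff,
    1px -1px 2px #fff,
    -1px -1px 2px #fff;
}
```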

a day ago · cubefox

That does not match my personal experience. It was very hard for me to read the text on Vista's glass.

21 hours ago · cosmotic

Agree. And when iOS started doing it years ago I was disappointed for that reason — loss of contrast. But also you just know there's a (significant?) battery cost to the effect.

a day ago · JKCalhoun

It may be done entirely on the GPU, making the energy cost insignificant. At least Apple added an accessibility setting to reduce the blur.

a day ago · cosmotic

Yeah, the variable-contrast issue (contrast dependent on background content) is a real problem for accessibility guidelines. For enterprise apps, if you want to pass the various accessibility reviews, it needs to be a user preference at least.

18 hours ago · kfarr

You have to combine translucent panels with an 80-90% opaque background color if you want to put text on them. Then it's a subtle effect and the text is readable.
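In CSS terms that is just a mostly-opaque tint layered over the blur; a minimal sketch (class name made up, values illustrative):

```css
.readable-glass {
  /* An ~85% opaque tint keeps text contrast stable
     regardless of what sits behind the panel. */
  background: rgb(255 255 255 / 0.85);
  -webkit-backdrop-filter: blur(12px);
  backdrop-filter: blur(12px);
  color: #111;
}
```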

a day ago · carlosjobim

False. The human brain and eye are built to recognize foreground and background, and to shift focus and attention based on blur.

Thousands of years of adaptation have honed the brain and the eye to be optimized for this type of view, much more so than the simple UX you see here on HN.

Not only does the blurred/frosted glass background look better, but it should be clearer, because that's what we've been designed to do.

a day ago · ninetyninenine

I don't follow. What exactly has the brain been honed to do?

Every single article I've read on the matter says higher contrast is more readable. The debate is over how high is 'good enough'.

a day ago · cosmotic

Do those articles use scientific evidence? Do they measure anything quantitative, or is it just a bunch of opinion?

UX poses as a scientific field when really there is little evidence-based research; it's all just technical procedure and big words. Technical procedures are illusory: they make you think it's legit with a bunch of rules, but to be scientific you need evidence. As technical as the rules are, a lot of it is made-up BS.

UX design is one of those BS concepts that litter the world and pose as legitimate. It's like the food pyramid from the USDA that said refined carbs are the most important part of every meal.

If the debate is about how much contrast, then run some science. Instead UX just debates, and as a result the entire field is made-up conjecture.

a day ago · ninetyninenine

Yes, the articles are scientific.

https://www.sciencedirect.com/science/article/abs/pii/S01698...

https://www.tandfonline.com/doi/abs/10.1080/0144929041000166...

https://jov.arvojournals.org/article.aspx?articleid=2121593

Though this lacks citations and evidence, it's by a generally accepted expert and authority in the field:

https://www.nngroup.com/articles/low-contrast/

I'm really struggling to understand the connections you're drawing to food.

19 hours ago · cosmotic

It’s OK if you’re struggling, as long as you are humble enough to admit it.

The food pyramid is based on cherry-picked data and biased experiments influenced by the food industry. This is similar to your cherry-picked data.

Your data measures low contrast vs. high contrast, but really you need to measure high contrast vs. a blurred background.

13 hours ago · ninetyninenine

A blurred background gives unpredictable contrast: sometimes low, sometimes high. Plus, the motion behind it would be distracting. You can see the impact in the static screenshot in the OP, where the text is harder to read over the light part of the blurred background than over the dark part.

7 hours ago · cosmotic

This holds true in 3D space to a _certain degree_, but here we’re looking at faux 3D projected onto a flat 2D surface where our eyes’ depth perception doesn’t work.

a day ago · simianparrot

No, we like photos with blur in the background. We prefer such photography. So who’s to say the faux surface fails?

Again, we need evidence-based measurements, which the entire UX field lacks. It’s just a bunch of made-up concepts strung together with little scientific research.

a day ago · ninetyninenine

The blur is depth-dependent, not a uniform filter.

And there’s a lot of research regarding UX, under the term Human-Computer Interaction. The thing is that it easily converges to something like Windows 2000 or macOS Leopard.

20 hours ago · skydhash

A photo isn’t an interactive interface, so there are entirely different perception tasks involved in parsing it. We like a lot of things in photos that are horrible for UI design — and vice versa.

But this also gets into another gray area where looking at a design for a UI != using said design to perform important tasks. Hence why prototyping and user tests often run counter to “pretty” interfaces.

19 hours ago · simianparrot

Right, but do you have any science to back up what you say?

I can simply say you’re wrong and I disagree, and then you have nothing to move your argument forward.

13 hours ago · ninetyninenine

Yes, our brain is good at this - but it still takes some processing capacity. I guess the point is: if you have a simple, high-contrast background, your brain needs less capacity to process it.

a day ago · Fannon

Do you have evidence to back that up?

The human body is designed to desire and consume the maximum amount of feel-good, tasty food for maximum energy, but we are finding that the human body has not evolved to actually handle such massive consumption, despite our desire for it. Our bodies do not handle maximum-capacity consumption; instead they have narrowly evolved to fill a strangely specific niche.

Same with our eyes. It may seem easier to like high-contrast designs, but our eyes, through millions of years of evolution, are not optimized for high-contrast signs, since those things never existed in nature.

a day ago · ninetyninenine
[deleted]
a day ago

This has like 10 stacked effects; that's about 7 too many except for the most demanding aesthetics.

Just use the background color + blur + box shadow or border
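That stripped-down recipe might look like this (class name made up, values illustrative):

```css
.simple-glass {
  background: rgb(255 255 255 / 0.25);
  -webkit-backdrop-filter: blur(10px);
  backdrop-filter: blur(10px);
  box-shadow: 0 4px 24px rgb(0 0 0 / 0.2);
  border: 1px solid rgb(255 255 255 / 0.4);
  border-radius: 12px;
}
```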

a day ago · tln

Folks who want a less intensive glass effect can pick and choose from the tutorial. You get a pretty respectable piece of glass at step three with just blur and shadow. I'd personally recommend doing that and mixing in a static light-ray image. I don't think most applications need the full JavaScript effect. It's fun, but it can be distracting.

With blur, shadow, and light rays alone you can already get _really close_ to that Forza image at the top.

21 hours ago · tyleo

Thanks for the article!

20 hours ago · tln

Interesting that most of the top comments are talking about wastefulness.

I think that's part of why everyone went to flat design - Windows Vista and glass effects looked great, but they were expensive to compute! Flat designs are aesthetically controversial and can be more difficult to use than skeuomorphic ones, especially for older users [0][1].

Considering that realism can aid in usability, I think it's totally valid to use effects like this on the web.

[0]: https://www.tandfonline.com/doi/abs/10.1080/0144929X.2020.18...

[1]: https://www.sciencedirect.com/science/article/abs/pii/S01419...

20 hours ago · LeftHandPath

What is the reason for the limit of stacked effects being n <= 3? Is it an accessibility issue, performance issue, maintenance issues, or what?

CSS is by far my weakest skill in terms of development, so I am completely unaware of the best/worst practices.

19 hours ago · hirvi74

Did you miss the bit where the author explains that he is going for the most demanding aesthetics?

21 hours ago · Angostura

Interactive examples are nice, but it would have been better if there were a side-by-side comparison or a toggle for switching to the previous iteration. In most of the cases the improvement of each iteration is so subtle I couldn't tell the difference.

a day ago · Karliss

Agreed, on mobile I can hardly tell the difference after the second step, if at all.

a day ago · Lorin

Same for me, I don’t see a single difference between the examples. I’ll try again later on my laptop and see if it changes anything.

The first example looks beautiful to me though, I might use it in my next UI.

21 hours ago · thiht

Quite impressive to be honest. I always enjoy seeing how creative and powerful CSS can be.

Though, I feel like there is some level of understanding of HTML/CSS that I will never be able to grasp -- like this demonstration. This person is out here making frosted windows, and I can't even center a div.

19 hours ago · hirvi74

When I was figuring out how web dev works, I learned the hard way that adding a div is often a good idea; look at most modern websites and you'll see they are a huge pile of divs. If you want a centered div, add another div that fills the space you want to center inside; centering within that div is then relatively easy.

The same sort of thing goes for many visual tricks, although this one is very clean, which makes it all the more impressive.
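For example, a minimal sketch of that wrapper pattern (class names made up for illustration), using flexbox to center inside the outer div:

```css
/* Assumed markup: <div class="outer"><div class="inner">…</div></div> */
.outer {
  display: flex;
  justify-content: center; /* horizontal centering */
  align-items: center;     /* vertical centering */
  width: 100%;
  height: 100%;
}
```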

19 hours agopclmulqdq

I appreciate you noticed the cleanness. It was a challenge to get it down to 2 divs and 2 classes. I’m disappointed that `background-attachment: fixed` isn’t supported on mobile after all these years. If it were, the full effect could easily be applied to any element with a single CSS class.

If I were to make serious use of this on a site, I'd probably opt for the non-JavaScript, one-class version anyway and optimize for simplicity.

18 hours agotyleo

For what it's worth, I have a site that I wrote in HTML/CSS where I make use of blur in the top bar, and I will be switching over to this effect soon using your blog post as a guide. Thank you.

11 hours agopclmulqdq

There is no problem in CS that can't be solved by another level of indirection.

19 hours agoheisenbit

If you are into glass UI, you might enjoy this one inspired by visionOS and done with Tailwind CSS. Click on the "show windows" button in the upper left corner. Works on mobile, but definitely better on larger screens. https://www.kodingkitty.com/blog/kitty-os/#the-widget

17 hours agoKodingKitty

Nice. I struggle, though, to compare the differences. I would find it easier to have one demo and a dropdown from which I can select which example I'm looking at.

a day agodeskr

I stopped being able to tell what was different after like the second one. All the rest looked the same to me. Might change with side-by-side.

16 hours agovundercind

This is good feedback. I considered putting each example side-by-side with the previous but it was too noisy.

I’ll think about it more this morning and see if I can come up with a UX for this that doesn’t interrupt the flow of the article as harshly.

a day agotyleo

perhaps just keep two glass tiles in each example, the old and new one.

But I'd be happy with a single one with all at the end :)

a day agoriffraff

Not quite frosted, but I went for a glass pane effect on the sidebar at https://www.gyan.dev/ffmpeg/builds/

a day agogyan

Your site is very relaxing and pretty on pc.

a day agornewme

For real, I'm not a fan of flashy colors in websites and that palette looks very calming

a day agoamarcheschi

Physically wrong, as it only incorporates the pixels directly behind the glass. Noticeable in the upper few rows, which over-emphasize the cropped background pixels.

This results in flickering when vertically scrolling over abrupt background color borders, e.g. noticeable in the mobile Twitter UI.

a day agoEduard

Yeah. I think Windows Vista didn't have that problem. I assume this can't be solved with CSS.

a day agocubefox

This can be solved with CSS. Extend the background blur all the way through the element and then use CSS masks to cut out the actual shape you want.

With this, you can remove the border (or inset box shadow), and the edge of the glass will look much, much more real
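A sketch of how that could look (selectors and sizes are made up; `clip-path` is one way to do the "mask" step): oversize the backdrop-filter layer so the blur can sample pixels beyond the pane, then clip it back to the pane's shape:

```css
/* Assumed markup: <div class="glass"><div class="glass-blur"></div>…</div> */
.glass { position: relative; }
.glass-blur {
  position: absolute;
  inset: -40px;                      /* extend past the pane so the blur
                                        can sample surrounding pixels */
  backdrop-filter: blur(8px);
  clip-path: inset(40px round 12px); /* cut the result back to the pane,
                                        keeping rounded corners */
  pointer-events: none;
}
```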

20 hours agoAndrew6rant

I tried this and it works! One unfortunate impact is a loss in simplicity. In the example on the page you can apply the non-JavaScript version to pretty much any element and get a nice glass effect with `border-radius` and such still functioning as expected.

Using `clip-path` I'm able to consider background pixels more correctly but it looks like I'd need an extra div and/or some sizing tricks to get everything working exactly as expected.

I'll keep noodling on this and may add an update to the page if a simple solution comes to mind.

16 hours agotyleo

Author here! Happy to answer any questions.

a day agotyleo

Very nice effect. I love these more creative realistic elements.

This one reminds me of this cool card effect

https://poke-holo.simey.me/

a day agodudus

Funny enough I also implemented the 3D cards in Forza Motorsport 7 :p

a day agotyleo

Clicking around, I found this breakdown [1]. Expertly crafted, but also really cool how the optical-illusion background does a lot of the heavy lifting!

---

1: https://www.joshdance.com/100/day50/

a day agoemmanueloga_

I noticed the blur only "sees" the underlying pixels directly below the glass surface. Any pixels outside that box, but within the blur radius, do not get used in the Gaussian filter. Probably the edge pixels are just repeated. You can see this when the light of the moon pops into view as the edge of the rectangle starts to touch the moon. It would look more real if the light pixels from the moon started to show through even while the box itself is still over the dark area.

Would this be possible to achieve in CSS? I presume having a larger box with the blur, but clipping it to a smaller box or something like that.

a day agomkbosmans

This was discussed elsewhere in the comments:

> This can be solved with CSS. Extend the background blur all the way through the element and then use CSS masks to cut out the actual shape you want.

> With this, you can remove the border (or inset box shadow), and the edge of the glass will look much, much more real

I tried this and it works! One unfortunate impact is a loss in simplicity. In the example on the page you can apply the non-JavaScript version to pretty much any element and get a nice glass effect with `border-radius` and such still functioning as expected.

Using `clip-path` I'm able to consider background pixels more correctly but it looks like I'd need an extra div and/or some sizing tricks to get everything working exactly as expected.

I'll keep noodling on this and may add an update to the page if a simple solution comes to mind.

16 hours agotyleo

Other folks made similar comments. I’ll have to see if this is possible. Your recommendation at the bottom sounds plausible so I’ll give it a go.

a day agotyleo

Is it intended that the subsurface scattering is so visible with a darker background? I can clearly see the circles of varying opacity in a step-like gradient, and I can clearly see the static light-rays image. Is it something with my browser or how it renders the thing? I really liked how it looked before the "More subsurface scattering" step came in.

a day agogloosx

Tbh I agree the final light steps can give it more of an arcade style and feel less realistic. I think if I used this effect widely I'd make a more professional light-rays image with a different opacity. I'm not entirely sure whether I'd use the subsurface effects at all. If I did, I might reduce their intensity.

One challenge with a demo like this is that subtle effects may look better, but they are harder to see. So I balanced making them visible enough to appreciate which is more intense than I’d otherwise want.

a day agotyleo

Very cool stuff!

Little tip, you might want to add a -webkit-user-select: none on the container elements so it prevents highlighting in Safari when dragging the cursor out and back over.
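For anyone following along, that would look something like this (the `.glass` selector is illustrative):

```css
.glass {
  -webkit-user-select: none; /* Safari needs the prefix */
  user-select: none;         /* other browsers */
}
```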

a day agoprisenco

Thanks for the tip! I had `user-select: none` on the container which had the glass inside of it but apparently that didn't work. I just updated the page with `user-select: none` on the glass itself and it looks like that did the job.

a day agotyleo

I may be mistaken, but I think you have to use the -webkit- prefix for it to work in Safari.

It's working great in Chrome and Firefox though.

a day agoprisenco

Interesting, I'll add the prefix and install desktop Safari in the future.

I was testing on Safari iOS and it looks like the non-prefix version worked there.

a day agotyleo

Yeah it's basically fine in Safari because mobile requires a double tap to highlight, but only a single click on desktop. So highlighting is more intentional on mobile than desktop.

a day agoprisenco

Safari is only available on macOS

a day agoyett

You must mean something by this. What do you mean?

// Written from a browser Apple calls Safari on iOS.

a day agoTerretta

I assumed that by them saying "installing desktop Safari" they are using an OS other than macOS since Safari comes preinstalled on macOS and can't even be uninstalled at least not without disabling System Integrity Protection.

18 hours agoyett

How does the dragging itself work? custom js?

17 hours agohipadev23

In the Cross-Platform Dynamic Light section I describe `data-*` attributes (https://www.tyleo.com/html-glass.html#cross-platform-dynamic...).

The dragging works with another bit of JavaScript (the only other bit on the page) which uses a `data-click-drag-area` attribute to define an element that contains draggable children, and a `data-click-drag-item` attribute to indicate that a child can be dragged.

The parent must be a 'positioned element' (it must have `position` set to something in CSS) and the children must have `position: absolute`.

I did this in TypeScript. I'll share the code below. You have to call `initDataClickDrag` from another script... if you want to include this script directly you can just remove the `export` keyword and call `initDataClickDrag()` at the bottom after it is defined:

  export const initDataClickDrag = () => {
    // Get all of the areas we can drag items in
    const dragAreas = document.querySelectorAll("[data-click-drag-area]");
    for (const dragArea of dragAreas) {
      // Only iterate `HTMLElement`s
      if (!(dragArea instanceof HTMLElement)) continue;

      // Get all of the items we can drag
      const dragItems = dragArea.querySelectorAll("[data-click-drag-item]");
      for (const dragItem of dragItems) {
        // Only iterate `HTMLElement`s
        if (!(dragItem instanceof HTMLElement)) continue;

        let isDragging = false;
        let lastCursorX: number | undefined = undefined;
        let lastCursorY: number | undefined = undefined;

        // Mouse down event to start dragging
        const downCallback = (obj: {
          readonly pageX: number;
          readonly pageY: number;
        }) => {
          isDragging = true;
          lastCursorX = obj.pageX;
          lastCursorY = obj.pageY;
        };

        dragItem.addEventListener("mousedown", (e) => {
          downCallback(e);
        });

        dragItem.addEventListener("touchstart", (e) => {
          const touches = e.touches;
          if (touches.length === 0) return;
          downCallback(touches[0]);
        });

        // Mouse move event to scroll while dragging
        const moveCallback = (obj: {
          readonly pageX: number;
          readonly pageY: number;
        }): boolean => {
          if (!isDragging) return false;

          if (lastCursorX === undefined) return false;
          if (lastCursorY === undefined) return false;

          const x = lastCursorX - obj.pageX;
          const y = lastCursorY - obj.pageY;

          const left = dragItem.offsetLeft - x;
          const top = dragItem.offsetTop - y;

          dragItem.style.left = `${left.toString()}px`;
          dragItem.style.top = `${top.toString()}px`;

          // Get dragArea dimensions
          const dragAreaRect = dragArea.getBoundingClientRect();

          // Get element dimensions
          const elementRect = dragItem.getBoundingClientRect();

          if (dragItem.offsetLeft < 0) dragItem.style.left = "0px";
          if (dragItem.offsetTop < 0) dragItem.style.top = "0px";

          if (left + elementRect.width > dragAreaRect.width) {
            // Right boundary
            const left = dragAreaRect.width - elementRect.width;
            dragItem.style.left = `${left.toString()}px`;
          }

          if (top + elementRect.height > dragAreaRect.height) {
            // Bottom boundary
            const top = dragAreaRect.height - elementRect.height;
            dragItem.style.top = `${top.toString()}px`;
          }

          lastCursorX = obj.pageX;
          lastCursorY = obj.pageY;

          return true;
        };

        document.addEventListener("mousemove", (e) => {
          moveCallback(e);
        });

        document.addEventListener(
          "touchmove",
          (e) => {
            const touches = e.touches;
            if (touches.length === 0) return;
            if (!moveCallback(touches[0])) return;
            e.preventDefault();
          },
          { passive: false },
        );

        // Mouse up event to stop dragging
        document.addEventListener("mouseup", () => {
          isDragging = false;
        });

        document.addEventListener("touchend", () => {
          isDragging = false;
        });
      }
    }
  };
13 hours agotyleo

Over 20 years ago, Eric Meyer made a glass background with CSS1, with a bit more texture but way less code: https://meyerweb.com/eric/css/edge/complexspiral/glassy.html If nothing else, those old things on css/edge can at least be used as inspiration.

16 hours agonedt

An interesting tradeoff here is that this is less CPU processing but more content to download. The effect is achieved by superimposing a blurred image on top of a non-blurred one. Here is the blurred image used in that demo: https://meyerweb.com/eric/css/edge/complexspiral/glassy-ripp...

In many cases this can be the right tradeoff to make. There is also a beauty to its simplicity.
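Roughly, the trick in that demo (file names illustrative): the page body and the glassy element use two aligned copies of the same artwork, one pre-blurred, both with `background-attachment: fixed` so they line up regardless of where the element sits:

```css
body {
  background: url(artwork.jpg) top left fixed;
}
.glassy {
  background: url(artwork-blurred.jpg) top left fixed;
  /* Both backgrounds are positioned against the viewport, so the
     blurred copy lines up exactly with the sharp one behind it. */
}
```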

16 hours agotyleo

Back then it was just small enough for my ISDN modem. And there are even more images for the headers. But it also falls back nicely to plain colored backgrounds, so the cost of loading the images is not a big issue. I haven't fully checked the frosted glass demo, but shouldn't that only impact the GPU? Filters can be accelerated.

14 hours agonedt

Yeah, the filters can be accelerated in most browsers but I believe that is still more battery cost than not having it.

14 hours agotyleo

I'm pretty sure there is no subsurface scattering in frosted glass. Indeed frosted glass is just regular glass with a frosted surface. Below the surface, it's just pure glass, which does not scatter light.

a day agoEcco

Tbh it’s more like an edge-lit acrylic effect. We did call it a “subsurface effect” when I worked at Forza but may have been using the term loosely. Forza also was going for acrylic rather than glass and we had several acrylic samples around the office to play with.

That being said my example is not acrylic… and it’s not quite glass either as you mention. It’s more like glass with some artistic license.

a day agotyleo

I agree that it looks nice and similar to what SSS eventually renders like. But strictly speaking it's not SSS. Actual SSS happens when the medium is somewhat transparent (light can enter the material) but also diffusive (light gets randomly bounced around within the material). Glass lacks this second property. A good example of a material exhibiting strong SSS would be wax.

Not that this matters much anyway, the effect is cool nonetheless, albeit a little bit improperly named.

a day agoEcco

If both sides are frosted, then you will have a similar effect as subsurface scattering.

a day agosmaddox

I was grappling with background-attachment: fixed on iOS devices just last week… caniuse.net indicated that it's supported now, but that is very much still not the case.

I think I’ve finally cracked why it’s not supported. The official line is that it’s “too expensive” on the cpu, but that doesn’t hold water when single-core performance of iPhones regularly outpaces Macs.

iOS Safari does one extra “abstraction” phase of an entire web page that allows for instant pinching and zooming of web pages. In order to get background-attachment: fixed working under such a paradigm, you would need to not only calculate where the background image is relative to the viewport, but also the size and placement of a zoomed document in real time. And on top of that, the browser designers would need to make some kind of decision on what a coherent implementation of the feature would even do under such circumstances.

I wish that iOS had a way to just turn off this extra abstraction in CSS altogether. It was a fine crib before responsive design was a thing, but for some pages, it just causes problems now. It’s not needed “everywhere” on the web any more than it’s necessary in all iOS apps.

19 hours agoEric_WVGG

I use pinch-zoom all the time, even on mobile-targeted pages. I'd be irritated with Apple if they let web developers disable that basic feature of my browser.

16 hours agovundercind

Too bad you're still forced to use Safari on iOS (in the US) even if you install Chrome or Firefox.

19 hours agoleptons

I wouldn't trust Chrome to treat my iPhone battery well, unfortunately. Looking forward to hearing reviews of real Firefox on iOS from Eurozone, though… I'd consider downloading the source and compiling with a dev cert if it turns out to be any good.

16 hours agoEric_WVGG

Yeah, the abstraction already breaks other common HTML techniques anyways like floating navigation bars.

16 hours agotyleo

The frosted glass effect was a huge eye-opener for me the first time I saw it in the early 2000s. At the time I thought PNG-based translucency was the be-all-end-all of GUI design and nothing would ever trump it. I genuinely believed there was no UI effect you could not create by simply adding an alpha channel and allowing for translucency and layers overlaying each other. Drop shadows, glass effects, all of it can be fully done with PNGs that overlay each other. Then I saw a frosted glass background effect and I was awestruck: here was a UI effect that wasn't simply overlaying pixels on pixels... I'll never forget that moment.

spoiler: and so, I left frontend :D

17 hours agosam0x17

I haven't done HTML or CSS since HTML4 and CSS2 were cutting edge. This kind of stuff blows my mind now. Very cool, too bad frontend is arcane magic these days, because I don't have time to learn all this anymore.

a day agovoidfunc

I feel like 90% of what he did was barely noticeable. I did the core frosted-glass look back around 2012. I still think it is great, and the contrast is fine if you do it right.

a day agoilaksh

This is really cool and thanks for sharing the details. Now I want to see coke bottle glass effects. :-) https://www.nathanallan.com/product/bottle-base-textured-gla...

21 hours agogbolcer

That's really interesting. I wonder if you could achieve it with a combination of the glassmorphism effects on my page and some neumorphism effects: https://www.justinmind.com/ui-design/neumorphism

With a quick google search, it looks like you can find some which mimic the 'coke bottle bottom' shape with shadow and light.

21 hours agotyleo

Many game UIs are built using HTML/CSS under the hood, so it has come full circle!

https://coherent-labs.com/products/coherent-gameface/

a day agoscq

[OneJS](https://onejs.com/) for Unity is a more modern entry in this space (disclaimer: I'm the creator). Instead of embedding a browser/webview, it lets you use web tech and integrates directly with UI Toolkit (Unity's native UI solution).

A general drawback of using web UIs in games is the lack of support for performant 2D vfx. It's something I'd like to tackle in 2025 with [Spark2D](https://github.com/Singtaa/Spark2D).

a day agoCreepGin

This is intriguing.

Why not embedding a browser directly though?

How do you feel about the Flutter based UI Widgets? (https://github.com/UIWidgets/com.unity.uiwidgets)

In my experience there are two kinds of games: UI is the game or UI supports the game. When UI is the game, the performance bottleneck is almost always text rendering. From a DX POV there are many solutions, but from a performance POV, how can anyone compete with Skia or the native text APIs?

a day agodoctorpangloss

I feel that no one can compete with Skia in terms of feature richness. But for performance, it really depends on the environment, hardware, and what you are doing.

Unity's UI Toolkit, for example, is mesh and shader-based, so it can be highly performant for complex UIs.

Interesting note: If you use OneJS with the NodeJS backend, you can use the wasm version of Skia and render all kinds of cool stuff onto a texture in Unity. Obviously the performance with that is not ideal for animation. But I was able to quickly and easily render music notations and charts on my deployed Unity app on Android using that method.

> Why not embedding a browser directly though? How do you feel about the Flutter based UI Widgets?

Mostly to avoid overheads on both disk space and runtime performance. QuickJS is only ~20MB to embed. And I have no experience with Flutter, unfortunately. =/

a day agoCreepGin

I was part of an investigation which considered Coherent products for one of the Forza titles. We didn't go with it at the time, and I haven't had the opportunity to use it on a title at other companies, but it looks rad.

Forza used a custom UI system based on XAML, and the acrylic elements at the top of the article were largely implemented in a custom shader. For a custom system it was pretty solid. We also built a lot of tech into it related to 3D placement of elements to support diegetic UI.

a day agotyleo

Was this the in-house c++ WPF implementation?

a day agomsarchet

Yes it was! I actually loved that thing.

a day agotyleo

Lots of games used to use [Scaleform](https://en.m.wikipedia.org/wiki/Scaleform_GFx) for their UI.

a day agoOptionOfT

Lots of games still use it, but it's been discontinued by Autodesk and is on its way out. Not to mention Flash itself being discontinued.

a day agotyleo

Great inspiration, thanks. I've been working on something that has gamified elements, and the penny has dropped that this style might be perfect.

a day agoflavaz

I always feel rendering such blurred panes takes quite a performance hit. Do we have any numbers on this?

I might just be old - when this was done on the CPU.

a day agotimvisee

Looks awesome!!

On a semi-related note, the best in-game UI I've ever seen was in Prey 2017. The little computers and terminals you interact with look amazing, and every single time I used one I was spellbound. The huge amount of effort that goes into small details in games in particular is incredible.

a day agochamomeal

Do browsers really use a box filter to approximate a gaussian blur? That seems implausible to me, as they produce pretty different looking blurs.

a day agomkbosmans

It doesn't seem improbable considering it's a huge performance win and perhaps many won't notice?

a day agodjmips

It is the performance win for similar-looking results that I find improbable. For a box blur to look like a Gaussian blur, you would need multiple passes. Even though each pass is then O(1) instead of O(n) (with n the blur radius), due to caching effects I think a Gaussian kernel would still be faster, especially for the small blur radii described in the article.

a day agomkbosmans

The multi-pass box blur effect is the one I've seen in game engines.
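For anyone curious, here's a minimal TypeScript sketch of that idea (names and radii are illustrative, not actual engine code): repeated box-blur passes converge toward a Gaussian profile by the central limit theorem, while each pass costs O(1) per sample via a sliding-window sum, independent of the blur radius.

```typescript
function boxBlur1D(src: number[], radius: number): number[] {
  const n = src.length;
  const out = new Array<number>(n);
  // Clamp indices so edge samples are repeated (clamp-to-edge).
  const clamp = (i: number) => Math.min(n - 1, Math.max(0, i));

  // Initial window sum covering [-radius, radius] around index 0.
  let sum = 0;
  for (let i = -radius; i <= radius; i++) sum += src[clamp(i)];

  for (let i = 0; i < n; i++) {
    out[i] = sum / (2 * radius + 1);
    // Slide the window: add the entering sample, drop the leaving one.
    sum += src[clamp(i + radius + 1)] - src[clamp(i - radius)];
  }
  return out;
}

// Three box passes are a common approximation of a Gaussian blur.
function approxGaussianBlur1D(src: number[], radius: number, passes = 3): number[] {
  let result = src;
  for (let p = 0; p < passes; p++) result = boxBlur1D(result, radius);
  return result;
}
```

A 2D blur is then two of these 1D passes (horizontal, then vertical) per box pass, since the box kernel is separable.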

16 hours agotyleo

Yeah because the GPU has special hardware which you can take advantage of for an optimized box filter.

https://www.rastergrid.com/blog/2010/09/efficient-gaussian-b...

a day agotyleo

That link is not a box filter, as it still uses weights to approximate a Gaussian convolution kernel. It just uses some special hardware to do fewer texture fetches. But that is a constant 2x improvement over the full 1D convolution, not the box-filter O(1) approach that the article suggests browsers are using.

a day agomkbosmans

You've moved me to a place of uncertainty here. I had some confirmation that _some_ browsers use box blurs for this effect, and I _know_ some game engines use multiple box blurs to approximate a Gaussian blur (having seen the code myself).

I updated a few sentences in the article to reflect that uncertainty. Thanks!

16 hours agotyleo

This is beautiful, and the thing that most surprised me is that it didn't fire up my fan and roast my machine. I'm used to browser visuals like these just grinding away - but this was very smooth and performant. Thanks for sharing!

20 hours agobinarymax

Thanks, I tried to minimize the amount of code to the bare necessities. It actually pained me that JavaScript was required to get a cross-platform version working.

Aside from that I haven’t jammed ads or trackers into every nook and cranny of my site which helps a lot with perf.

11 hours agotyleo

I personally like this aesthetic and think it looks great. I also don't think the performance questions raised in this thread are material given modern user device capabilities.

However, the contrast between the glass background and foreground depends on the background content by design, which is a serious issue for complying with various accessibility guidelines. For enterprise apps, if you want to pass the various accessibility reviews, there at least needs to be a user preference to disable this; or just don't use this technique, to guarantee a pass on contrast-related questions.

18 hours agokfarr

Just so you know...

I'm not seeing `background-attachment: fixed` working at all. Not the CSS and not the JavaScript solution either. The rays stay static, detached from the moving div, just as they were before applying that code.

In both Firefox and Vivaldi, on Windows.

a day agogenezeta

Very interesting. I tested primarily on Firefox Windows and iOS and I don’t see similar complaints by others so I suspect it is working widely… I wonder if you have an extension which could be disabling the JavaScript?

It is especially weird that “background-attachment: fixed” wouldn’t work on desktop though since that is old old technology.

Are you sure you are seeing the light ray texture at all? If you aren’t sure, you could try boosting your screen brightness. It’s a very subtle effect.

Otherwise, it would be useful to know if your console shows any error messages on the page.

17 minutes agotyleo

Nice effect, nice article too

a day agolovegrenoble

Beautiful. And excellent writing!

a day agomichelreij

Aw. Kind of like Windows 7 Aero theme.

a day agoAnimats

I have this almost everywhere on my KDE Plasma desktop, with the help of Kvantum* (their Debian dependencies seem to be broken; I had to build it from source in a container). Maybe it won't last for long, but I absolutely love the aesthetics.

* https://github.com/tsujan/Kvantum