
DirectX Adopting SPIR-V as the Interchange Format of the Future

No surprise here, given the extent to which HLSL is already the de facto shading language for Vulkan.

Khronos has already mentioned at a couple of conferences that there will be no further work on improving GLSL, and given DirectX's weight in the industry, HLSL kind of took over.

Additionally, for the NVidia fans, Slang might also get a place in the Vulkan ecosystem; discussions are ongoing, as revealed in SIGGRAPH sessions.

pjmlp, 14 hours ago

What about WGSL though, the shader language of WebGPU? WebGPU is kind of a Vulkan lite, but unlike with Vulkan, Apple is on board, and Apple is actually the reason why WGSL exists as yet another shading language.

Simran-B, 13 hours ago

What about it? Nobody wanted WGSL; it's just an artifact of having to appease Apple during WebGPU's development, as you say. I don't see why it would be adopted for anything else.

The old WebGPU meeting notes have some choice quotes from (IIRC) Unity and Adobe engineers literally begging the committee not to invent a new shader language.

jsheard, 13 hours ago

>The old WebGPU meeting notes have some choice quotes from (IIRC) Unity and Adobe engineers literally begging the committee not to invent a new shader language.

This was an interesting tidbit, so I tried to find the source for it. While I did not find it, I did find the December 2019 minutes,[0] which have a related point:

>Apple is not comfortable working under Khronos IP framework, because of dispute between Apple Legal & Khronos which is private. Can’t talk about the substance of this dispute. Can’t make any statement for Apple to agree to Khronos IP framework. So we’re discussing, what if we don’t fork? We can’t say whether we’re (Apple) happy with that.

I found this link via rust hn[1] which I found after reading this blog post:[2]

>Vulkan used a bytecode, called SPIR-V, so you could target it from any shader language you wanted. WebGPU was going to use SPIR-V, but then Apple said no

The lobsters thread also links to a relevant HN post:[3]

>I know, I was there. I also think that objection to SPIR-V wasn't completely unfounded. SPIR-V is a nice binary representation of shaders, but it has problems in the context of WebGPU adoption: It's so low level [...] It has a lot of instructions [...] Friction in the features we need, vs features Khronos needs. [...] there is no single well specified and tested textual shading language. HLSL doesn't have a spec.

The linked blog post from lobsters was also discussed on HN, where you also commented.[4]

It would be great if you could find that Unity/Adobe discussion as I would be interested to read it.

[0] https://docs.google.com/document/d/1F6ns6I3zs-2JL_dT9hOkX_25...

[1] https://lobste.rs/s/q4ment/i_want_talk_about_webgpu

[2] https://cohost.org/mcc/post/1406157-i-want-to-talk-about-web...

[3] https://news.ycombinator.com/item?id=23089745

[4] https://news.ycombinator.com/item?id=35800988

fngjdflmdflg, 6 hours ago

> It would be great if you could find that Unity/Adobe discussion as I would be interested to read it.

https://github.com/gpuweb/gpuweb/wiki/Minutes-2019-09-24

Corentin: Web Shading Language — A high-level shading language made by Apple for WebGPU.

<room in general grimaces>

[...]

Jesse B (Unity): We care about HLSL

Eric B (Adobe): Creating a new high level language is a cardinal sin. Don’t. Do. That. Don’t want to rewrite all my shaders AGAIN.

Jesse B: If we can transcode to HLSL to whatever you need, great. If we can’t, we may not support your platform at all.

Eric B: Would really not like even to write another transcoder. If there’s an existing tool to get to an intermediate representation, that’s good. Would suggest SPIRV is an EXCELLENT existing intermediate representation.

Note that the WSL language made by Apple, which sparked that discussion, is unrelated to the WGSL language they ended up shipping, but the sentiment stands: the ISV representatives just wanted them to use HLSL or SPIR-V.

jsheard, an hour ago

WebGPU, like WebGL, is a decade behind the native APIs it is based on.

No one asked for a new Rust-like shading language that they have to rewrite their shaders in.

Also, contrary to FOSS circles, most studios don't really care about Web 3D, hence why streaming is such a thing for them.

There have been HLSL-to-SPIR-V compilers for several years now; this is Microsoft's own official compiler getting a SPIR-V backend as well.

pjmlp, 13 hours ago

Because WebGL, just like WebAssembly (with its hacky thread support and compilation issues), is a giant kludge.

WebGL still has the fundamental issue of not even supporting anything resembling a modern OpenGL feature set (with "modern" meaning 2010s-era stuff like compute shaders and multi-draw indirect) in theory. And in practice, macOS doesn't support WebGL2, which means no multiple render targets (necessary for deferred rendering), so it's almost impossible to make a modern-ish game that runs well in a browser.

IMO the problem isn't that WebGPU/Wasm is a decade (or however many years) behind, but that we cannot reliably expect a feature set that existed on typical mid-2000s PCs to work in the browser across all platforms (which is the whole point of the web).

torginus, 3 hours ago

It's almost as if some fruit-based company is sabotaging the efforts to keep its walled garden.

Ygg2, 2 hours ago

[flagged]

dummy-upstart, 12 hours ago

The native WebGPU libraries accept SPIR-V as input, and they offer libraries to convert WGSL to SPIR-V and back. That is, WGSL is only needed when running WebGPU in browsers, but even there it can be code-generated from other shading languages by going through SPIR-V (tbh, I actually like WGSL; it's simple and straightforward).

flohofwoe, 4 hours ago

Except that the conversion to WGSL is a complete waste of compute resources, engineering effort and the time of everyone involved. WebGPU is a _web_ API after all, even if people realized the runtimes could be used outside the browser.

Converting your SPIR-V to WGSL just to convert it back to SPIR-V to feed it into a Vulkan driver, or running an entire language frontend just to emit DXIL or Metal IR. We learned 15 years ago that textual shader languages at the GPU API interface are a mistake, but we're forced to relearn the same lesson because Apple wouldn't play ball. What a joke.

MindSpunk, 3 hours ago

WGSL could be good for Khronos. It’s a modern language with an actual specification. It’s gaining users every day.

kvark, 6 hours ago

WGSL was a mistake and hopefully they get rid of it; it negatively impacts WebGPU's adoption. At least it did for me: the syntax is one of the worst ever created, just horrible.

WhereIsTheTruth, 5 hours ago

My understanding was that DXC lacked support for compiling various HLSL features to SPIR-V (hence SM7 now), so there are still a bunch of Vulkan-focused projects, like Godot, which only support GLSL.

But yes, the games industry has been almost entirely HLSL since forever, and this is going to help remove the final obstacles.

TillE, 14 hours ago

Yep, DXC's HLSL-to-SPIR-V path especially was a big issue when it came to supporting new features from Vulkan.

Though I would still like to see if Slang can succeed, and I am always a bit afraid of Microsoft just dropping the ball somewhere.

minraws, 9 hours ago

> Khronos already mentioned in a couple of conferences that there will be no further work improving GLSL

Unfortunately, HLSL isn’t an open standard like GLSL. Is it Khronos's intention to focus solely on SPIR-V moving forward, leaving the choice of higher-level shader languages up to application developers?

hgs3, 9 hours ago

There's likely to be very little funding for GLSL moving forward, and I would expect no major spec updates ever again, but vendors will probably keep publishing extensions for new GPU features and fixing things up. GLSL still has a fairly large user base. Whether SPIR-V is going to be the only Khronos shading language (or whatever you want to call it) moving forward, that's hard to say. Nvidia is pushing for Slang as a Khronos standard at the moment. Not sure if anyone's biting.

ferbivore, 8 hours ago

Yes, they officially stated at Vulkanised and SIGGRAPH, among other places, that there is no budget for GLSL improvements, and also that they aren't programming language experts anyway.

It is up to the community to come up with an alternative, and the game development community is mostly on HLSL.

pjmlp, 5 hours ago

Will this help games be more compatible with the Proton layer on Linux, or is this not related?

gigatexal, 14 hours ago

In theory, if DirectX games start passing shaders to the driver in SPIR-V, the same format Vulkan uses, then yes, it should make Proton's job easier. Translating the current DXIL format to SPIR-V is apparently non-trivial, to say the least:

https://themaister.net/blog/2021/09/05/my-personal-hell-of-t...

https://themaister.net/blog/2021/10/03/my-personal-hell-of-t...

https://themaister.net/blog/2021/11/07/my-personal-hell-of-t...

https://themaister.net/blog/2022/04/11/my-personal-hell-of-t...

https://themaister.net/blog/2022/04/24/my-personal-hell-of-t...

jsheard, 14 hours ago

Maybe. Maybe not; it could well be an incompatible flavour of SPIR-V.

trelane, 10 hours ago

It's unlikely to diverge from the same general flavor as Vulkan. The worst parts of the DXIL-to-SPIR-V conversion I remember from that chain of blog posts are rebuilding structured control flow and how it interacts with atomics and wave convergence.

That's a problem that goes away irrespective of any DX extensions to SPIR-V for supporting the binding model DX uses.
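
To make the structurization point concrete, here is a toy Python sketch (my own illustration, not from the blog posts, with made-up block names): DXIL-style IR is a free-form control flow graph, and one of the first things a translator must do is rediscover loops, e.g. by finding back edges via dominators, before it can emit SPIR-V's structured merge/continue constructs.

```python
# Toy CFG as an adjacency list: block name -> successor blocks.
# DXIL-style IR is an arbitrary graph like this; SPIR-V requires
# structured control flow, so a translator must first rediscover
# the loops hiding in the graph.
CFG = {
    "entry":  ["header"],
    "header": ["body", "exit"],  # conditional branch: loop again or leave
    "body":   ["header"],        # back edge -> this is a natural loop
    "exit":   [],
}

def dominators(cfg, entry):
    """Classic iterative dataflow: dom(b) = {b} | intersection of dom(pred)."""
    preds = {b: [] for b in cfg}
    for b, succs in cfg.items():
        for s in succs:
            preds[s].append(b)
    dom = {b: set(cfg) for b in cfg}
    dom[entry] = {entry}
    changed = True
    while changed:
        changed = False
        for b in cfg:
            if b == entry:
                continue
            new = set(cfg)
            for p in preds[b]:
                new &= dom[p]
            new |= {b}
            if new != dom[b]:
                dom[b] = new
                changed = True
    return dom

def back_edges(cfg, entry):
    """An edge u -> v marks a natural loop iff v dominates u."""
    dom = dominators(cfg, entry)
    return [(u, v) for u, succs in cfg.items() for v in succs if v in dom[u]]

print(back_edges(CFG, "entry"))  # [('body', 'header')]
```

Finding the loops is the easy half; the hard part the blog series describes is deciding where the merge blocks go once atomics and wave convergence constrain which restructurings are actually legal.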

MindSpunk, 8 hours ago

I haven't used either in a while, what is missing from GLSL?

camel-cdr, 13 hours ago

It's C-based, has no support for modular programming (everything needs to be a giant include), and no one is adding features to it, since Khronos hasn't assigned any budget to it.

HLSL has evolved to be C++-like, including lightweight templates, mesh shaders, and work graphs; it has module support via libraries and is continuously improved with each DirectX release.

pjmlp, 13 hours ago

I'm not a fan of GLSL either, but adding C++-like baggage to shading languages, as HLSL and especially MSL do (the latter literally is C++), is a massive mistake IMHO. I'd prefer WGSL over that sort of pointless language complexity any day.

flohofwoe, 4 hours ago

I'd wish more Microsoft devblog content was like this one.

riedel, 5 hours ago

I could do without the shitty meme images.

flohofwoe, 4 hours ago

Hopefully this isn't actually a third SPIR-V dialect.

binary132, 13 hours ago

I wouldn't expect to be able to load a D3D12 SPIR-V blob into Vulkan or OpenGL anyway, just because the 'input semantics' are very different (and I think that's also the main difference between GL and Vulkan SPIR-V blobs). But AFAIK SPIR-V is extensible for this type of difference without rendering existing SPIR-V tools completely useless.

flohofwoe, 2 hours ago

This is really good news!

tester756, 14 hours ago

Cinematic crossovers have gone too far

omershapira, 12 hours ago

Good. Now if only Windows would adopt Vulkan as the graphics API of the future.

bobajeff, 13 hours ago

It's mostly on us, the developers. Vulkan is fully supported on Windows.

I would say that if you want multi-platform support, just use Vulkan. It covers most of the platforms (especially if you include MoltenVK)[0].

Though, for games, if you want to support Xbox, that usually throws a curveball into API choice planning, as it might be a more important target than Linux/Android/Mac/iOS (maybe even combined) for your game. So if you already have to support DX for that...

[0] https://www.vulkan.org/porting

miniupuchaty, 3 hours ago

What's wrong with D3D12? It works perfectly fine for what it does. In my experience it causes a lot fewer issues than Vulkan. And it's not really due to Windows not supporting Vulkan correctly, since my experience with Vulkan has mostly been on Linux.

I don't dislike Vulkan either, it's just that I don't see the point of replacing something that works pretty well.

mardifoufs, 13 hours ago

Adopting Vulkan doesn't mean removing DirectX 12, just like adopting SPIR-V doesn't mean removing HLSL. No one said anything about getting rid of anything.

bobajeff, 13 hours ago

SPIR-V is not an alternative to HLSL. It's an intermediary format that you compile HLSL (or GLSL) to.
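
To illustrate what "intermediary format" means in practice, here is a small Python sketch that decodes the five-word header every SPIR-V module starts with (layout per the public SPIR-V specification; the example blob below is fabricated by hand, not real compiler output):

```python
import struct

SPIRV_MAGIC = 0x07230203  # fixed magic number, per the SPIR-V spec

def parse_header(blob: bytes) -> dict:
    """Decode the 5-word header at the start of a SPIR-V module."""
    magic, version, generator, bound, schema = struct.unpack_from("<5I", blob)
    if magic != SPIRV_MAGIC:
        raise ValueError("not a SPIR-V module")
    # The version word packs major/minor into bytes 2 and 1.
    major = (version >> 16) & 0xFF
    minor = (version >> 8) & 0xFF
    return {"version": f"{major}.{minor}", "generator": generator, "id_bound": bound}

# Fabricated header for a SPIR-V 1.6 module with an ID bound of 42.
blob = struct.pack("<5I", SPIRV_MAGIC, (1 << 16) | (6 << 8), 0, 42, 0)
print(parse_header(blob))  # {'version': '1.6', 'generator': 0, 'id_bound': 42}
```

This flat word-stream design is why any frontend (HLSL via DXC, GLSL via glslang, Slang, etc.) can target it: consumers only care about the binary words, not the source language.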

Narishma, 11 hours ago

Reinvention of the wheel, and a tax on developers who have to deal with supporting "yet another thing".

Same reason standards have some value.

shmerl, 12 hours ago

I don't think it's reinventing the wheel, since Vulkan was ready quite a bit after D3D12, but yeah, I guess maybe it could be the standard on Windows after D3D12 becomes obsolete...

But that's going to be quite a while, since I can't think of an actual feature (for end users) that is missing from one vs the other right now.

Everything on Windows already uses D3D12/DirectX basically, so it would actually be a huge wheel reinvention to migrate to a standard just for the sake of it.

mardifoufs, 8 hours ago

I think saying that DX was first, so it's Vulkan that was reinventing the wheel, is incorrect given the historical context.

AMD and DICE developed a prototype API called Mantle, which is what both DX12 and Vulkan are based on.

Both Vulkan (glNext back then) and DX12 were announced around the same time. VK came a bit later, as standards bodies are usually slower in coming to decisions, but it's not like VK was reinventing anything from DX.

I remember we had a laugh reading early DX12 documentation, as in parts it was just copied from Mantle, with names unchanged in places!

miniupuchaty, 3 hours ago

DX12 was pushed as NIH, since it was made from Mantle the same way Vulkan was. So to reduce NIH, it only makes sense to unify it all in Vulkan.

They already made the first sensible step with SPIR-V here. The next step would make just as much sense.

And stuff can be translated into Vulkan if it can't be rewritten.

shmerl, 5 hours ago

It's Vulkan that was reinventing the DX12 wheel, wasn't it though?

izacus, 12 hours ago

Vulkan is based on Mantle, which predates the release of DX12 by about 2 years.

HideousKojima, 12 hours ago

The same can also be said about D3D12; it is at least 'heavily inspired' by Mantle. In the end, not much of Mantle survived in Vulkan either, though. Mantle was a much cleaner API than Vulkan, because it didn't have to cover as many GPU architectures as Vulkan does (Mantle especially didn't have to care about supporting shitty mobile GPUs).

flohofwoe, 3 hours ago

In this case Vulkan is the only option. DX12 is a non-starter, since it was never intended to be universally available.

shmerl, 10 hours ago

DX12 is proprietary. Vulkan is not.

ekianjo, 11 hours ago

vulkan is already supported on windows as a first-class citizen by all major IHVs. I am not sure what this "adoption" you speak of would entail. If you're talking about replacing d3d12, that actually is a terrible idea.

nicebyte, 13 hours ago

That's not really the same as being supported by Windows. I think that's third-party support, not something built into the OS.

bobajeff, 13 hours ago

what do you mean when you say "built into the os"? d3d12 is just an api. the d3d runtime is user-space; both the UMD that wraps it and the KMD are supplied by the hardware vendor. In the end, both a d3d app and a vulkan app end up talking to the very same KMD. See here for reference:

https://learn.microsoft.com/en-us/windows-hardware/drivers/d...

nicebyte, 13 hours ago

D3D is clearly more integrated into the OS than Vulkan is.

Most importantly, Windows includes a software D3D renderer (WARP) so apps can depend on it always being present (even if the performance isn’t spectacular). There are lots of situations where Vulkan isn’t present on Windows, for example a Remote Desktop/terminal server session, or machines with old/low-end video cards. These might not be important for AAA games, but for normal applications they are.

Another example: Windows doesn’t include the Vulkan loader (vulkan-1.dll), apps need to bundle/install that.
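
To illustrate the loader point, here is a hedged Python sketch (my own example, not from the thread) that checks whether a Vulkan loader library is even findable on the current machine; the library names are the standard loader names:

```python
import ctypes.util

def has_vulkan_loader() -> bool:
    """Report whether a Vulkan loader library can be located.

    D3D12's runtime is always present on Windows 10+, but the Vulkan
    loader (vulkan-1.dll on Windows, libvulkan.so.1 on Linux) only
    exists if a driver or application installed it.
    """
    for name in ("vulkan-1", "vulkan"):  # Windows / Unix loader names
        if ctypes.util.find_library(name):
            return True
    return False

print(has_vulkan_loader())
```

An app that wants to fall back gracefully (the way D3D apps can fall back to WARP) has to do a probe like this and ship its own fallback path, rather than assuming the API exists.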

mrpippy, 11 hours ago

> D3D is clearly more integrated into the OS than Vulkan is.

sure, but addressing the two points that you brought up would not entail changing windows _the operating system_, just the stuff that ships with it. you could easily ship SwiftShader along with warp and the loader library; both of those are just application libraries as far as the os/kernel is concerned. of course now we're in the territory of arguing about "what constitutes an OS" :-)

nicebyte, 10 hours ago

Oh, I was under the impression that DirectX 12 was built into Windows like Metal is on Apple platforms.

bobajeff, 13 hours ago

Does that support extend to ARM? Not sure if it's still the case, but I recall that early Windows-on-ARM devices didn't have native Vulkan (and I believe OpenGL was translated to DirectX via ANGLE).

jcotton42, 13 hours ago

I haven't laid my hands on any ARM windows devices so I wouldn't be able to tell you. I'd be somewhat surprised if the newer snapdragon stuff doesn't have vulkan support because qcom supports vulkan first-class on its gpus. in fact, on newer android devices OpenGL support might already be implemented on top of vulkan, but don't quote me on that.

nicebyte, 12 hours ago

> If you're talking about replacing d3d12, that actually is a terrible idea.

Why do you say that?

jimbob45, 13 hours ago

I say this because vulkan is hamstrung by being an "open API" intended to run on a very wide range of devices including mobiles. this has major repercussions, like the awkward descriptor set binding model (whereas d3d12's descriptor heaps are both easier to deal with and map better to the actual hardware that d3d12 is intended to run on, see e.g. https://www.gfxstrand.net/faith/blog/2022/08/descriptors-are...). overall d3d has the benefit of a narrower scope.

Another problem with being an open API is that (and this is my own speculation) it's easier for IHVs to collaborate with just Microsoft to move faster and hammer out the APIs for upcoming novel features like work graphs for example, vs bringing it into the public working group and "showing their cards" so to speak. This is probably why vk gets all new shiny stuff like rtrt, mesh shaders etc. only after it has been in d3d for a while.

One could argue this is all solvable by "just" adding a torrent of extensions to vulkan but it's really not clear to me what that path offers vs d3d.

nicebyte, 12 hours ago

I would guess that if DX didn't exist, the iteration on the VK side would just be faster, through extensions, like you've mentioned.

In the end it might have even sped up the adoption of such features. Currently, if you have a multiplatform engine, even though Windows is like 99% of your PC player base, it's still sometimes a tough decision to just use a feature that you can't support on all your targets.

miniupuchaty, 3 hours ago

The downside is that it ties them incredibly heavily to Microsoft, and makes cross-platform efforts much harder.

trelane, 10 hours ago

D3D11 and D3D12 are objectively better designed APIs than their 'Khronos counterparts' OpenGL and Vulkan, as is Metal on iOS and macOS.

flohofwoe, 4 hours ago

While for OpenGL vs D3D11 I would agree, I don't find the D3D12 vs Vulkan difference to be that big.

What are the parts that you consider objectively better in D3D12 compared to Vulkan?

miniupuchaty, 3 hours ago

Why?

Vulkan is not a well designed API. It's so complicated, verbose, and error prone. It's pretty bad.

forrestthewoods, 10 hours ago

But are you saying that compared to DX or just in general?

We're talking here about a potential DX replacement, not about design in general, and the bulk of it is very similar for both APIs.

There are some small quirks from Vulkan being made to be easily extensible which in the end I consider worth it.

I personally like how consistent the API is in both patterns and naming. After using it for a while, it's easy to infer what function will do from the name, how it will handle memory, and what you'll need to do with that object after the fact.

I find the documentation better than DX's.

What are your biggest problems with it?

miniupuchaty, 4 hours ago

At least it's documented.

giomasce, 5 hours ago

The DirectX specs are much better than both the OpenGL and Vulkan specs because they also go into implementation details and are written in 'documentation language', not 'spec language':

https://microsoft.github.io/DirectX-Specs/

flohofwoe, 3 hours ago

I don't think that going into implementation details is what I would expect from an interface specification. The interface exists precisely to isolate the API consumer from the implementation details.

And while they're much better than nothing, those documents are certainly not a specification. They are individual documents, each covering a part of the API, with very spotty coverage (mostly focusing on new features) and an unclear relationship to one another.

For example, the precise semantics of ResourceBarrier() are nowhere to be found. You can infer something from the extended barrier documentation, something is written in the function MSDN page (with vague references to concepts like "promoting" and "decaying"), something else is written in other random MSDN pages (which you only discover by browsing around, there are no specific links) but at the end of the day you're left to guess the actual assumptions you can make.

*EDIT* I don't mean to say that the Vulkan or SPIR-V specification is perfect either. One still has a lot of doubts while reading them. But at least there is an attempt at writing a document that specifies the entire contract that exists between the API implementer and the API consumer. Missing points are in general considered bugs and are sometimes fixed.

giomasce, 3 hours ago

> I don't think that going into implementation details is what I would expect from an interface specification.

I guess that's why Microsoft calls it an "engineering spec", but I prefer that sort of specification over the Vulkan or GL spec, TBH.

> The interface exists precisely to isolate the API consumer from the implementation details.

In theory that's a good thing, but at least the GL spec was quite useless because concrete drivers still interpreted the specification differently - or were just plain buggy.

Writing GL code precisely against the spec didn't help with making that GL code run on specific drivers at all, and Khronos only worried about their spec, not about the quality of vendor drivers (while some GPU vendors didn't worry much about the quality of their GL drivers either).

The D3D engineering specs seem to be grounded much more in the real world, and the additional information that goes beyond the interface description is extremely helpful (source access would be better of course).

flohofwoe, 2 hours ago

If you search for the 'D3D12 spec', what you actually find is that D3D12 doesn't have a specification at all. D3D12's "spec" is only a document that states the differences from D3D11. There's no complete, holistic document that describes D3D12 entirely in terms of D3D12. You have to cross-reference back and forth between the two documents and try to make sense of it.

Many of D3D12's newer features (Enhanced Barriers, which are largely a clone of Vulkan's pipeline barriers) are woefully underspecified, with no real description of their precise semantics. Even finding out whether a function is safe to call from multiple threads simultaneously is quite difficult.

MindSpunk, 3 hours ago

It should.

shmerl, 12 hours ago

[flagged]

r1chardnl, 15 hours ago

Step 1: Microsoft has a proprietary alternative to an open standard, people complain.

Step 2: Microsoft begins adopting the open standard, people complain.

jsheard, 14 hours ago

I think they're referring to https://en.wikipedia.org/wiki/Embrace,_extend,_and_extinguis...

majorchord, 14 hours ago

I know that's what they're referring to. If you're concerned about Microsoft gaining undue influence over Vulkan/SPIR-V, rest assured they already effectively control the desktop graphics landscape: however they define DirectX becomes the template for hardware vendors to follow, and Vulkan then has to follow their lead.

The pattern is especially obvious with big new features like raytracing, which was added to DirectX first and then, some time later, added to Vulkan with an API that almost exactly mirrors how DirectX abstracts it. There are even Vulkan extensions which exist specifically to make emulating DirectX semantics easier.

jsheard, 14 hours ago

That's understandable. Control over standards has immense value. Just look at Nvidia's CUDA.

chucke1992, 14 hours ago

CUDA's success has much to thank Intel and AMD for, as they never provided anything with OpenCL that could be a proper alternative in developer experience, graphical debugging, libraries, and stable drivers.

Even the OpenCL 2.x C++ standard was largely ignored or badly supported by their toolchains.

pjmlp, 14 hours ago

Isn't the point of OpenCL to be... open? Not only did Intel and AMD not provide enough value, but neither did the community.

CUDA... is kind of annoying. And yet, it's the best experience (for GPGPU), as far as I can tell.

I feel like it says something that CUDA sets a standard for GPGPU (i.e. its visible runtime API) but others still fail to catch up.

winwang, 14 hours ago

The problem is the OpenCL development model is just garbage.

Compare the hello world of OpenCL [1] vs CUDA [2]. There's so much boilerplate and low-level complexity in the OpenCL version, whereas the CUDA example is just a few simple lines using the CUDA compiler.

And what really sucks is that it's pretty hard to get away from that complexity the way OpenCL is structured. You simply have to know WAY too much about the hardware of the machine you are running on, which means having Intel/AMD/Nvidia code paths in your application logic when trying to make an OpenCL app.

Meanwhile CUDA, because it's unapologetically just for Nvidia cards, completely does away with that complexity in the happy path.

For something to be competitive with CUDA, the standard needs something like a platform-agnostic bytecode to target, so a common accelerated runtime can scoop up the bytecode and run it on a given platform.

[1] https://github.com/intel/compute-samples/blob/master/compute...

[2] https://github.com/premprakashp/cuda-hello-world

cogman10, 12 hours ago

Yeah, not just OpenCL, but even "newer" standards like WebGPU. I considered making a blog post where I just put the two hello worlds side-by-side and say nothing else.

I was severely disappointed after seeing people praise WebGPU (I believe for being better than OpenGL).

As for the platform-agnostic bytecode, that's where something like MLIR would work too (kind of). But we could also simply start with transpiling that bytecode into CUDA/PTX.

Better UX with wider platform compatibility: CuPy, Triton.

winwang, 10 hours ago

OpenCL 2.x was a major failure across the board.

OpenGL and Vulkan were good though. Gotta take the wins where they exist.

dragontamer, 14 hours ago

Thanks to Intel and AMD.

pjmlp, 13 hours ago

NVidia never even implemented OpenCL 2.0.

AMD had a buggy version. Intel had no dGPUs, so no one cared how well an iGPU ran OpenCL (be it 1.2 or 2.0).

--------

AMD was clearly pushing C++ AMP at the time, with Microsoft. And IMO it worked great! Alas, no one used it, so that died.

dragontamer, 8 hours ago

Don't blame NVidia for Intel and AMD failures to support OpenCL.

pjmlp, 5 hours ago

cough cough

Remind me who owns the OpenCL trademark, again?

Intel and AMD weren't the ones that abandoned it. Speaking in no uncertain terms, there was a sole stakeholder that can be held responsible for letting the project die and preventing the proliferation of Open GPGPU standards. A company that has everything to gain from killing Open standards in the cradle and replacing them with proprietary alternatives. Someone with a well-known grudge against Khronos who's willing to throw an oversized wrench into the plans as long as it hurts their opponents.

talldayo, 14 hours ago

Don't blame Apple for what Khronos, Intel and AMD have done with OpenCL after version 1.0.

It isn't Apple's fault that Intel and AMD didn't deliver.

pjmlp, 13 hours ago

It is entirely Apple's fault that they rejected OpenCL to replace it with a proprietary library. If this were an implementation or specification problem, Apple had every opportunity to shape the project in their own image. They cannot possibly argue that this was done for any reason other than greed, considering they themselves laid the framework for such a project. Without Apple's cooperation, open source GPGPU libraries cannot reasonably target every client. Apple knows they wield this power, and considering their history, it's both illogical and willfully ignorant to assume they're not doing this as part of a broader trend of monopolistic abuse.

Having shut out Nvidia as part of a petty feud, Apple realized they could force any inferior or non-free CUDA alternative onto their developers, no matter how unfinished, slow, or bad it is. They turned away from the righteous and immediately obvious path to complicate things for developers who wanted to ship cross-platform apps instead of Mac-only ones.

talldayo, 12 hours ago

The fact is that Intel, AMD, and even Google (coming up with RenderScript) didn't give a shit about making OpenCL something developers cared about.

pjmlp, 5 hours ago

Would you be willing to share what the deal is with Apple/Khronos relations?

google234123, 14 hours ago

Apple didn't like OpenGL, rightfully so, and came up with their own Metal, which they released two years before the first version of Vulkan.

Now people pretend that Apple is bad because it never adopted Vulkan and never implemented the "good modern OpenGL" (which never really existed).

troupo, 14 hours ago

It runs deeper than that. During the development of WebGPU, it came to light that Apple was vetoing the use of any Khronos IP whatsoever, due to a private legal dispute between them. That led to WebGPU having to reinvent the wheel with a brand new shader language, because Apple's lawyers wouldn't sign off on using GLSL or SPIR-V under any circumstances.

The actual details of the dispute never came out, so we don't know if it has been resolved or not.

jsheard, 13 hours ago

Apple, refusing to use open standards, and instead demanding everyone else do things their way? Say it’s not so!

binary132, 13 hours ago

The bizarre thing is that Apple used to cooperate with Khronos; they were involved with OpenGL and even donated the initial version of the OpenCL spec to them. Something dramatic happened behind the scenes at some point.

jsheard, 13 hours ago

My absurd pet theory is that this was related to their 2017-2020 dispute with Imagination. Apple started (allegedly) violating Imagination's IP in 2017. They were, at the very least, threatened with a lawsuit, and the threats were compelling enough that they've been paying up since 2020. It could be that Apple pulled out of the Khronos IP pool to prepare a lawsuit, or to have a better chance of dodging one.

ferbivore, 8 hours ago

Most likely related to how Khronos managed OpenCL after getting hold of it.

pjmlp, 13 hours ago

I really want them to get it together with OpenCL 3 and especially Vulkan interop but I’m not really holding out hope for it.

binary132, 8 hours ago

OpenCL 3 is OpenCL 1; no one cares. Intel has made extensions on top of it for DPC++, and AMD is pushing ROCm or whatever else they think of.

They're still not showing that they care.

pjmlp, 5 hours ago

Please, tell us all about how Khronos hurt Apple with free software that Apple had every opportunity to influence. Point to the boo-boo that justifies making things worse for everyone.

talldayo, 12 hours ago

My dear, Apple has zero influence on Windows, Linux, and Android.

Where are those great OpenCL implementations from Intel, AMD and Google?

pjmlp, 5 hours ago

I can imagine a scenario: Apple donates OpenCL, then later suggests some changes for the next version. Khronos delays or pushes back, and now OpenCL is stuck from Apple's perspective and they can't do anything about it.

google234123, 8 hours ago

Yep.

pjmlp, 5 hours ago

Yes, obviously. It is an incredibly tiresome comment which is brought up every single time Microsoft adopts any sort of open standard, and it's never done with any particular insight into whether this is one of the times it'll be relevant.

plorkyeran, 14 hours ago

Has it ever not ended up being relevant? Like, I would agree that it is kind of redundant--and thereby maybe doesn't need to be said--but if there are people who actually think "maybe this time will be different", arguably the comment should be pinned to the top of the thread as a reminder?