It’s a fun demo, but they never go into buildings, the buildings all have similar sizes, the towns have similar layouts, there are numerous visual inconsistencies, and the towns don’t really make sense. It generates stylistically similar boxes, puts them on a grid, and lets you wander the spaces between?
I know progress happens in incremental steps, but this seems like quite the baby step from other world gen demos unless I’m missing something.
> they never go into buildings, the buildings all have similar sizes, the towns have similar layouts, there are numerous visual inconsistencies, and the towns don’t really make sense
These AI generated towns sure do seem to have strict building and civic codes. Everything on a grid, height limits, equal spacing between all buildings. The local historical society really has a tight grip on neighborhood character.
From the article:
> It would also be sound, with different areas connected in such a way to allow characters to roam freely without getting stuck.
Very unrealistic.
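Sarcasm aside, the soundness property the article describes is at least mechanically checkable. A minimal sketch, assuming the generated layout can be rasterized into a boolean walkability grid (everything here is illustrative, not the paper's actual method):

```python
from collections import deque

def is_sound(walkable: list[list[bool]]) -> bool:
    """True if the walkable region is one connected component,
    i.e. a character can roam freely without getting stuck."""
    rows, cols = len(walkable), len(walkable[0])
    cells = [(r, c) for r in range(rows) for c in range(cols) if walkable[r][c]]
    if not cells:
        return False
    seen, queue = {cells[0]}, deque([cells[0]])  # flood fill from any walkable cell
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and walkable[nr][nc] \
                    and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return len(seen) == len(cells)
```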
One of the interesting things about mostly-open-world game environments, like GTA or Cyberpunk, is the "designed" messiness and the limits that result in dead ends. You poke at someplace and end up at a locked door (a texture that looks like a door but that you can't interact with) signaling there's absolutely nothing interesting beyond where you're at. Having no chance of hitting a dead end is boring; when every path leads to something interesting, there's no "exploration".
The other extreme, where you can go inside everywhere, turns out to be boring. Second Life has that in some well-built areas. If you visit New Babbage, the steampunk city, there's almost a square kilometer of city. Almost every building has a functional interior. There are hundreds of shops, and dozens of bars. You can buy things in the shops, and maybe have a simulated beer in a pub. If anyone was around, you could talk to them. You can open doors and walk up stairs. You might find a furnished apartment, an office, or just empty rooms.
Other parts of Second Life have roadside motels. Each room has a bed, TV, bathroom, and maybe a coffee maker, all of which do something. One, with a 1950s theme, has a vibrating bed, which will make a buzzing sound if you pay it a tiny fee. Nobody uses those much.
No plot goes with all this. Unlike a game, the density of interesting events is low, closer to real life. This is the fundamental problem of virtual worlds. Realistic ones are boring.
Amusingly, Linden Lab has found a way to capitalize on this. They built a suburban housing subdivision, and people who buy a paid membership get an unfurnished house. This was so successful that there are now over 60,000 houses. There are themed areas and about a dozen house designs in each area. It's kind of banal, but seems to appeal to people for whom American suburbia is an unreachable aspiration. The American Dream, for about $10 a month.
People furnish their houses, have BBQs, and even mow their lawn. (You can buy simulated grass that needs regular mowing.)
So we have a good idea of the appeal of this.
No, the fundamental problem isn’t the recreation of real life. Rather, it’s that real life isn’t mirrored in the ways that matter, like having the agency to pull off systemic changes; it's something I’m having a hard time articulating. What I can say is that Eve Online pulls off certain aspects of this pretty well.
This comment kind of reminded me of a YouTube channel I completely adore. AnyAustin (https://www.youtube.com/@any_austin) has quite a few videos exploring and celebrating open world video games.
Also related: the YouTube channel Shesez (https://youtube.com/@boundarybreak), which explores what’s outside the bounds in video games.
For example:
Off Camera Secrets | Goldeneye (N64) - Boundary Break https://youtu.be/Reaz4aKYci8
Hidden Secrets in GTA 3 https://youtu.be/xBpNWVDQ5QM
> when every path leads to something interesting, there's no "exploration"
While this sentence makes sense from a current game-design perspective, I have to say it strikes me as very unrealistic. Facing dead ends has always ruined the immersion for me.
Sounds like the AI accidentally implemented NIMBY style zoning.
This is potentially a lot more useful in creation pipelines than other demos (e.g. World Labs) if it uses explicit assets rather than a more implicit representation (Gaussians are pretty explicit, but not in the way we're used to working with in games, etc.).
I do think Meta has the tech to easily match other radiance-field-based generation methods; they publish many foundational papers in this space and have Hyperscape.
So I'd view this as an interesting orthogonal direction to explore!
Thanks! That’s some nuance I absolutely missed
Is there a working 'demo'? I don't see one.
>It’s a fun demo, but they never go into buildings, the buildings all have similar sizes, the towns have similar layouts, there are numerous visual inconsistencies, and the towns don’t really make sense.
That's 95% of existing video games. How many doors actually work in a game like Cyberpunk?
On a different note: when do we mere mortals get to play with a worldgen engine? Google/Meta/Tencent have shown them off for a while, but without any feasible way for a nobody to partake. Are they that far from actually being good?
I would think the argument for this is that it would enable and facilitate more advanced environments.
There are also plenty of games with fully explorable environments; I think it's more of a scale and utility consideration. I can't think of what use I'd have for exploring an office complex in GTA other than to hear Rockstar's parodic office banter, but Morrowind had reason for its interiors to exist in most contexts.
Other games have intrinsically explorable interiors, like NMS and Enshrouded. Elden Ring was pretty open in this regard as well, and so was Zelda; I'm sure there are many others. TES doesn't fall into this category because of how its interiors are structured: a door teleports you to a separate interior level, ostensibly to save on the poly budget. Again, scale is an important consideration, in terms of both meaning and effort in context.
This doesn't seem to be doing much to build on that; couldn't we already procedurally scatter empty shell buildings with low-to-mid-detail assets at a pretty decent degree of efficiency?
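For reference, that baseline is a few dozen lines: rejection-sample footprints so buildings never overlap, then drop a shell asset on each one. A minimal sketch (footprint sizes and counts are made up for illustration):

```python
import random

def scatter_buildings(n, area=200.0, min_gap=4.0, tries=2000, seed=None):
    """Place up to n axis-aligned building footprints in a square area,
    rejecting any candidate that overlaps an already-placed one."""
    rng = random.Random(seed)
    placed = []  # footprints as (x, y, width, depth)
    for _ in range(tries):
        if len(placed) >= n:
            break
        w, d = rng.uniform(6, 18), rng.uniform(6, 18)
        x, y = rng.uniform(0, area - w), rng.uniform(0, area - d)
        # Keep only if separated from every placed footprint on some axis.
        if all(x + w + min_gap < px or px + pw + min_gap < x or
               y + d + min_gap < py or py + pd + min_gap < y
               for px, py, pw, pd in placed):
            placed.append((x, y, w, d))
    return placed  # each footprint then gets an empty shell mesh from a library

print(len(scatter_buildings(40, seed=1)), "buildings placed")
```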
There are a bunch of different approaches, many of them very expensive to run. You can play with the World Labs one; their approach is cheap to explore once generated (vs. an approach that generates frame by frame).
The quality is currently not great, and they are very hard to steer or work with in any meaningful way. You will see companies reusing the same demo scenes repeatedly because those are the ones that looked cool and worked well.
This just seems like an engineered pipeline of existing GenAI to get a 3D procedurally generated world that doesn't even look SOTA. I'm really sorry to dunk on this for those who worked on it, but it doesn't look like progress to me. The current approach looks like a dead end.
An end-to-end _trained_ model that spits out a textured mesh of the same result would have been an innovation. The fact that they didn't do that suggests they're missing something fundamental for world model training.
The best thing I can say is that maybe they can use this to bootstrap a dataset for a future model.
I would simply spend $5 at an asset store for some blobby generic buildings rather than orchestrate a 12-figure corporate debt bubble to build warehouses of rapidly depreciating rust that boil a lake in order to generate them, but I guess that's why I'm not a Business Genius.
Or spend that $5 supporting folks like Quaternius, who offer really cool low-poly game assets. I wonder whether 3D artists still have the will to give away assets for free these days.
Wealth, not Will
I'd call this 3DAssetGen. It's not a world model and doesn't generate a world at all. Standard sweat-and-blood-powered world building puts this to shame; even low-effort world building with canned assets does (see RPG Maker games).
It's not really a world, no. It generates only a small square, by the looks of it, and a world built out of squares will be annoying.
Still, it's a first effort. I do think AI can really help with world creation, which I see as one of the biggest barriers to the metaverse. Look at how much time and money it costs to create even the small island world of GTA...
Last time I checked, the metaverse was all about people collaborating in the making of a shared world, and we already have that. Examples include Minecraft and VRChat, both very popular metaverses. I don't see how the absence of bot-generated content is a barrier.
Then, let's say people are allowed to participate in a metaverse in which they have the ability to generate content with prompts. Does this mean they're only able to build things the model allows or supports? That seems very limiting for a metaverse.
I guess that doesn't matter in games where the world ultimately doesn't matter; there it will just be better procedural generation. But personally I adore games where the developers actually put effort into designing a world that is interesting to explore, where things are deliberately placed for story or gameplay-mechanics reasons.
I suppose AI could in theory reach the point where it understands the story, theme, and gameplay of a game while designing a world.
But when anyone can generate a huge open world, who really cares? It's the same as it is now: you have to make something that stands out from the crowd, something notable.
Nowhere on the page does it state that it's a world model.
It's called world gen.
I know nothing about games and game development, but comments INSTA-sticking up for BigCo are increasingly hilarious to me.
World generation is different from world modeling. It's like Java versus JavaScript. I'm not sure why I bother with technical discussion on Hacker News anymore.
My comment was too snarky; I take your point. Based on the discussion, this capability is closer to a really cool automated asset pack than to "building 3D worlds". My understanding of world modeling is that it's aimed toward AGI, and you're saying nobody implied this is world modeling.
You're right. But the criticism is that it's closer to 2D asset packs than to 3D worlds, and you're being overly charitable to Meta and not charitable enough to the community response.
edit: this is just me oversharing about why I downvoted you. I didn't intend for you to feel dismissed.
Panorama generation via 2D diffusion inpainting, then lifting to a 3D point cloud, then 2D inpainting conditioned on rendered point clouds, then optimization of a 3D Gaussian splatting scene. It's image gen stitched into 3D, not a conceptual world model. I hate the ambiguity of the term.
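Roughly, in Python-style pseudocode (every function below is an illustrative stand-in, not the paper's actual API):

```python
def generate_scene(prompt: str):
    """Sketch of the pano -> point cloud -> inpaint -> 3DGS pipeline
    described above. All helper functions are hypothetical stand-ins."""
    pano = diffusion_panorama(prompt)            # 2D diffusion + inpainting
    points = lift_to_point_cloud(pano)           # monocular depth -> 3D points
    for view in sample_novel_views(points):
        rendered = render_point_cloud(points, view)
        filled = inpaint_2d(rendered, prompt)    # fill disocclusion holes
        points = merge_views(points, filled, view)
    return optimize_gaussian_splats(points)      # fit a 3DGS scene to the views
```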
It's funny: I clicked the link to the demo, but it 404s. Then I tried googling WorldGen, and it turns out someone else built the same thing in May and called it WorldGen as well [0]. It looks like it does better at realistic 3D scenes than this.
[0] https://worldgen.github.io/index.html
That's pretty far from the same thing, their technique is a 2D image in a trenchcoat. It instantly falls apart if you move more than a foot or so from the original camera position.
The paper is quite good [0]; there are some interesting details on tackling individual meshes.
(Couldn't clean up the link at all, sorry.)
[0]: https://scontent-lhr6-2.xx.fbcdn.net/v/t39.2365-6/586830145_...
This is like the GPT-2 of world gen.
Ten years from now we might have games that generate entire worlds based on a unique storyline customized for each playthrough. Maybe even endless stories.
Baldur's Gate 5 is going to be memorable!
The Elder Scrolls could use this plus Radiant AI for some neat quests once it improves.
Game studios will probably explore this first in dungeon generators, where if the generation goes wrong, not much is lost: just exit and generate another.
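In other words, a generate-validate-retry loop, which is cheap when a bad roll costs nothing. A minimal sketch; the generator and validator here are stand-ins for whatever model call and sanity checks a studio would plug in:

```python
import random

def generate_valid_dungeon(generate, is_valid, max_attempts=10):
    """Keep regenerating until the dungeon passes validation
    (connectivity, spawn points, reachable exit, etc.)."""
    for _ in range(max_attempts):
        dungeon = generate()
        if is_valid(dungeon):
            return dungeon
    raise RuntimeError(f"no valid dungeon after {max_attempts} attempts")

# Toy usage: random 8x8 grids, accepted only if at least half the cells are open.
gen = lambda: [[random.random() < 0.6 for _ in range(8)] for _ in range(8)]
ok = lambda d: sum(map(sum, d)) >= 32
dungeon = generate_valid_dungeon(gen, ok)
```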
They use the word "interactive" several times, and I kept expecting that to mean truly interactive, i.e., the ability to open doors or to pick up objects and use them. But they seem to use "interactive" to mean "able to be viewed and explored from a first-person perspective". By that definition, any 3D model is interactive.
Having the technical know-how to have an AI generate 3D models, but then generatively compositing those assets into environments in a way that would have seemed overly simplistic to gamedevs three decades ago…
It's funny: I don't know if I see a use for it, and that feeling surprises me. Just as procedural maps bore me, I suspect this will too in any use case I can think of. What I like is the perceived care behind every action. After the initial "wow" at the care put into this research, I don't think it will end up being a "wow" that scales. I don't know if I'm making myself clear.
Not sure what is going on, but it seems like Meta is lagging behind startups and other frontier models in this space. They invested more in Reality Labs over the last decade than any other company, and they come up with such poor rendering while competitors are making pretty cool real-world demos. Meta should stop treating these as research projects and actually spend time building real products with proper 3D rendering.
I can see this working as a randomly generated map for some quick game, like the Worms games did in 2D.
But having everything sit so rigidly on a grid kind of ruins the feel. It's rare for every building to be isolated like that. I'm guessing they had trouble producing neighboring buildings that looked like they could logically share a common wall or alleyway.
I loathe how meta.com makes my back button gray out in my browser. Stop trying to force me to stay, it is obnoxious.
These look a lot like World of Warcraft. I wonder how much of their training data they got from it.
It's definitely a step forward from that 'Minecraft world' gen tech demo that had no persistence of vision.
I can see it being useful for solo Unity developers with a concept but limited art ability, who would currently be limited to pixel-art games.
Every environment appears to be a miniature golf course version of reality. Was this a deliberate choice?
This is cool, but it seems much more like 3D asset generation than scene generation like World Labs'.
WorldLabs' Marble creates a Gaussian Splat scene. It's a totally different technology.
It won't be long now till we see a VR Star Trek holodeck type of thing.
> fully navigable, interactive 3D worlds that you can actually walk around and explore.
You can explore, but is there a single interesting thing to find?
https://www.challies.com/articles/no-mans-sky-and-10000-bowl...
Interesting example, because No Man's Sky is by all accounts an extremely good game now, and the developers' next game, with tons of procedural generation, is very highly anticipated.
First steps towards the Holodeck.
Compare https://odyssey.ml/ (another text-conditioned world generator).
Does it in fact create a world that reflects the prompt? Probably not, except in a vague way.
Any world you can summon into existence with a few words is, by the laws of information theory, going to be generic. An interesting world requires thousands of words to describe.
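A back-of-the-envelope version of that argument (the vocabulary size and world dimensions below are made up, but representative):

```python
import math

# Information in the prompt: ~10 words drawn from a ~50,000-word vocabulary.
prompt_bits = 10 * math.log2(50_000)   # ~156 bits

# Information needed to pin down even a toy town:
# a 16x16 grid of lots, each holding one of 8 building types.
world_bits = 16 * 16 * math.log2(8)    # 768 bits

print(f"prompt: {prompt_bits:.0f} bits, world: {world_bits:.0f} bits")
# The shortfall must come from the model's priors, which is exactly
# where the generic, samey feel comes from.
```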
Can’t wait until entire triple A games are generated by a prompt. Hopefully in my lifetime.
It's weird: the houses are almost all too tall and narrow.
My first thought was the comment in the thread from the other day about Zork and hooking up an AI image generator to that.
But, it looks like WorldGen has that slightly soulless art style they used for that Meta Zuckverse VR thing they tried for a while.
>My first thought was the comment in the thread from the other day about Zork and hooking up an AI image generator to that.
I did this in the early GPT days with 'Tales of Maj'Eyal' and, to a lesser extent, RimWorld.
It works great for games that have huge compendiums of world lore, bestiaries, etc.
With a roguelike you would just map tiles to 3D terrain and objects.
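That mapping really is just a lookup table plus placement. A minimal sketch (the tile characters and asset names here are invented for illustration):

```python
# Hypothetical mapping from roguelike map tiles to 3D asset names.
TILE_TO_ASSET = {
    "#": "wall_segment",
    ".": "floor_slab",
    "+": "door_frame",
    ">": "stairs_down",
    "~": "water_plane",
}

def instantiate_level(ascii_map: str, tile_size: float = 2.0):
    """Turn an ASCII roguelike map into (asset, position) placements."""
    placements = []
    for z, row in enumerate(ascii_map.splitlines()):
        for x, tile in enumerate(row):
            asset = TILE_TO_ASSET.get(tile)
            if asset is not None:
                placements.append((asset, (x * tile_size, 0.0, z * tile_size)))
    return placements

print(instantiate_level("#####\n#.>.#\n#####")[:4])
```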
Instead of Zork, I would try All Things Devours or Spiritwrak. These have been libre games forever; they are written in Inform 6, with all the source code available, and the compiler and the English library are free too. Inform 6 is a really structured language that maps literal in-game objects onto programming (OOP) objects.
Thanks for the info.
So clearly, soon I’ll be able to tell a story and watch it appear before my eyes.