This is the potential development in AI I’m most interested in. So naturally, I tested this when I first used ChatGPT. In classic ChatGPT fashion, when asked to make a directed acyclic graph representing cause and effect, it could interpret the request well enough to produce a simple graph…but got the cause-and-effect flow wrong for something as simple as lighting a fire. I haven’t tried it again with ChatGPT-4, though.
That’s definitely a better graph visually. (The image capability is cool; the graph I got on the earlier model was in text form.) But I think it is wrong: the “prepare” nodes (tinder, kindling, and fuel wood) are all redundant with each other. There’s also a direct link from “prepare tinder wood” to “maintain fire”; if this is a causal diagram indicating the sequence of actions a person needs to take, “prepare wood” should link to “light fire” instead. I don’t have a record of the exact prompts I was using, but I was working more with the fact that oxygen, fuel, and heat are all necessary but independent preconditions for a fire to start.
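For what it’s worth, here’s a minimal sketch of the structure I had in mind, written in Python with networkx (the node labels are my own paraphrase, not the exact ones from either model’s graph). It encodes the fire triangle: fuel, oxygen, and heat as independent preconditions that all feed into “light fire”, with “maintain fire” only downstream of ignition:

```python
import networkx as nx

# Fire-triangle causal DAG: each precondition is independent,
# and all three must hold before ignition; maintaining the fire
# only makes sense after it has been lit.
G = nx.DiGraph()
G.add_edges_from([
    ("prepare fuel (tinder/kindling/wood)", "light fire"),
    ("ensure oxygen supply", "light fire"),
    ("apply heat source", "light fire"),
    ("light fire", "maintain fire"),
])

assert nx.is_directed_acyclic_graph(G)  # sanity check: no causal loops
print(list(nx.topological_sort(G)))     # one valid ordering of actions
```

A topological sort of this graph gives a valid action sequence, which is exactly the property the earlier model’s graph violated by wiring a “prepare” step straight to “maintain fire”.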