LLMs are what happens when you point a neural network at a lot of text training data.
You can argue that it’s technically accurate to call RNNs statistical pattern generators, but that misses a lot of what makes them interesting.
“Statistical pattern generator” sort of implies that they’re similar to Markov generators. That’s certainly not true: a Markov generator conditions only on a fixed window of recent tokens, while an RNN maintains a hidden state that can carry information across the entire sequence.
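To make the contrast concrete, here’s a minimal sketch of a vanilla RNN cell in NumPy. The weight names and sizes are illustrative (randomly initialized, not trained); the point is only that the state `h` is threaded through every time step, which is exactly what a fixed-window Markov generator can’t do.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden_size, input_size = 8, 4

# Illustrative weights; in a real RNN these would be learned.
W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))
b_h = np.zeros(hidden_size)

def rnn_step(h, x):
    """One time step: the new state depends on the input AND the old state."""
    return np.tanh(W_xh @ x + W_hh @ h + b_h)

h = np.zeros(hidden_size)                    # state persists across the sequence
for x in rng.normal(size=(10, input_size)):  # a toy sequence of 10 inputs
    h = rnn_step(h, x)                       # each step folds new input into old state
```

Because `W_hh @ h` feeds the previous state back in at every step, information from the first input can, in principle, still influence the last output, no matter how long the sequence is.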
It’s much more accurate to say that RNNs are sets of extremely sophisticated non-linear functions, and that the training algorithms (usually gradient descent) are optimization procedures that minimize a loss function.
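Here’s a toy sketch of that “minimization” half of the claim: gradient descent nudging a single parameter downhill on a made-up quadratic loss. The loss, the starting point, and the learning rate are all assumptions chosen for clarity; real training does the same thing over millions of parameters, with gradients computed by backpropagation.

```python
def loss(w):
    return (w - 3.0) ** 2      # toy loss, minimized at w = 3

def grad(w):
    return 2.0 * (w - 3.0)     # its derivative

w = 0.0                        # arbitrary starting point
lr = 0.1                       # learning rate (step size), an assumption
for _ in range(100):
    w -= lr * grad(w)          # step opposite the gradient, downhill

print(w)                       # ~3.0: the loss has been minimized
```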