May I ask why you wrote this? Guesses:
- To find others with a similar view to collaborate with
- Because you think this model should propagate
If the first is true, I have a similar view. I've been thinking about this a lot ever since an experience with gpt-4-base left me astounded at its apparent ability to infer traits of authors and forced me to decouple intelligence from 'goals over the world' or 'agency'. I've also been theorizing about training processes which could select for purely-'intelligence/creativity'-type systems without goals. I'm a fan of the predictive models agenda and evhub's posts.
I haven't published anything yet since it's hard for me to write + exfohazard concerns, but I've been meaning to find some collaborators to make progress with. If this interests you, here's my contact info: {discord: `quilalove`, matrix: `@quilauwu:matrix.org`, email: `quila1@protonmail.com`}. (I prefer text / can't do calls)
Thanks for sharing. Would you be interested in providing feedback if I write a more in-depth post?
Both of the reasons you listed are part of my motivation for writing this post, but the main reason was so I would have a "support" post that I could link to when mentioning points made here without derailing future posts.
I might be, depends on the post contents, so you're welcome to send one for feedback if you do.
Hm, well, that seems like a low-commitment way to assess collab potential. Which is fair of you. But do you have any advice for how I might find more people interested in discussing training stories for {non-agentic/predictive} SI?
My current plan is "write a document and share it with some friends/acquaintances in my online network, and hope it's convincing enough that we join forces"
(I wish I could just ignore the 'other human minds' (https://carado.moe/human-minds.html) bottleneck and focus solely on theory)