Self-Revealing Systems

Dear OpenAI Design Team,

I think you are doing a wonderful job. I’d like to offer a suggestion that builds on a concept Jakob Nielsen described in 1995 known as "Progressive Disclosure." In essence, this interaction design pattern means gradually revealing complexity and features to users, much like a video game that unlocks new mechanics as the player progresses.

As we venture into the next iteration of user-AI interaction, I propose what I call “Self-Revealing Systems.” Imagine an AI that not only responds to our commands but also observes and adapts to our usage patterns. If it noticed that we were underusing its capabilities, it could offer a nudge, perhaps asking whether we’d like to explore a particular feature further. Consider this dialogue:

System: I see that your question could be answered with multiple data points. Did you know I can format my response to enhance clarity? Would you be interested in learning more about this?

Me: Oh, I didn’t know that. What kind of formatting could you provide?

The conversation could continue back and forth from there. Based on my responses, the system would learn how open I am to advice and how often I’d like to receive it. It would also remember whether it had already told me about a feature, and how much detail my explanations should include.
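To make the idea concrete, here is a minimal sketch of the bookkeeping such a system might keep per user. Everything here is a hypothetical illustration: the class name, fields, and thresholds are my assumptions, not a description of any real product.

```python
from dataclasses import dataclass, field

@dataclass
class NudgeTracker:
    """Hypothetical per-user state for deciding when to suggest a feature."""
    suggested: dict = field(default_factory=dict)  # feature -> times suggested
    accepted: int = 0                # nudges the user followed up on
    declined: int = 0                # nudges the user brushed off
    min_turns_between_nudges: int = 5  # assumed pacing threshold
    turns_since_last_nudge: int = 0

    def record_turn(self) -> None:
        self.turns_since_last_nudge += 1

    def record_response(self, accepted: bool) -> None:
        # Learn how open the user is to advice from their reactions.
        if accepted:
            self.accepted += 1
        else:
            self.declined += 1

    def receptiveness(self) -> float:
        # Fraction of nudges the user welcomed; neutral 0.5 prior at the start.
        total = self.accepted + self.declined
        return self.accepted / total if total else 0.5

    def should_nudge(self, feature: str) -> bool:
        if self.suggested.get(feature, 0) > 0:
            return False  # never repeat a tip the user has already seen
        if self.turns_since_last_nudge < self.min_turns_between_nudges:
            return False  # don't nag; respect the user's preferred pacing
        return self.receptiveness() >= 0.3  # assumed openness cutoff

    def nudge(self, feature: str) -> None:
        self.suggested[feature] = self.suggested.get(feature, 0) + 1
        self.turns_since_last_nudge = 0
```

In this sketch the three things the letter asks for map directly onto state: `receptiveness()` captures openness to advice, `min_turns_between_nudges` captures frequency, and `suggested` remembers what has already been offered.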

Just think of it as a relationship. The longer you know someone you care about and want to get along well with, the more effective and pleasant the communication becomes. We learn what each other can do, and we support each other better. I have had colleagues I worked very closely with, where after a couple of years we only needed to say a few words and a whole world of shared meaning stood behind the message.

That is a future I find really exciting. At the same time, I can’t always imagine what someone else is capable of. A good colleague can read my intent and come back with suggestions that draw on a skill I didn’t know they had. For example: “Hey, John. What would be really great is if I made a short 3D animation as an explainer for this new product.” This actually happened to me, and I was blown away, not only that the colleague could do 3D animation, but that he was really good and fast at it.

Why should learning AI systems be any different? These systems should become partners in discovery, integrating seamlessly into our workflow without requiring exhaustive study.

There is so much that the natural-language interface already does well, but we are just scratching the surface of what will be possible. If you start implementing any of these features and would like some feedback, drop me a note.

Cheers,

John
