ChatGPT’s memory is better than ever, but it’s not perfect

If you’ve been using ChatGPT for a while, you’ve probably told it a lot about yourself. Maybe you’ve asked for help editing blog posts, shared the details of your dream side project, or even started treating it like a therapist – although we recommend you think twice before using it to make big life decisions. But fast-forward a few days, and sometimes it’s forgotten all of those important details about you. As if it never helped you write a business plan, sketch out a fitness routine, or navigate heartbreak earlier in the week.

If you’ve ever wondered why ChatGPT can sound so smart yet forget who you are from one day to the next, you’re not alone. It’s one of the most common questions people ask, and the answer lies in how its memory works.

Unless you’ve actively turned ChatGPT’s memory features off, newer versions can now remember far more than they used to, including useful insights from previous chats. But that doesn’t mean it remembers everything, or that it recalls even essential details consistently. In fact, figuring out what it remembers and when takes a little understanding, and a bit of training on your part, too.

Memory, upgraded

ChatGPT used to have a short-term memory problem, especially if you were using the free version. It could only work with what’s known as a “context window,” meaning it could respond based on your current message and some of your previous messages, but not many. As soon as the chat ended, lots of details were gone.
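To make the context-window idea concrete, here’s a minimal Python sketch of how a fixed window drops older messages. The function names, the four-characters-per-token estimate, and the tiny token budget are illustrative assumptions for this example, not how ChatGPT actually tokenizes text or manages its context internally.

```python
def estimate_tokens(text: str) -> int:
    """Very rough token estimate: assume ~4 characters per token."""
    return max(1, len(text) // 4)


def build_context(messages: list[dict], max_tokens: int = 500) -> list[dict]:
    """Keep only the most recent messages that fit in the window.

    Older messages are silently dropped, which is exactly why details
    from earlier in a long chat can vanish from the model's "memory".
    """
    window: list[dict] = []
    used = 0
    for message in reversed(messages):  # walk from newest to oldest
        cost = estimate_tokens(message["content"])
        if used + cost > max_tokens:
            break  # no room left; everything older is forgotten
        window.append(message)
        used += cost
    return list(reversed(window))  # restore chronological order


if __name__ == "__main__":
    chat = [
        {"role": "user", "content": "My name is Becca and I write a newsletter."},
        {"role": "assistant", "content": "Nice to meet you, Becca!"},
        # ...imagine hundreds of messages here...
        {"role": "user", "content": "Can you help me edit this week's issue?"},
    ]
    # With a deliberately tiny budget, the earliest message doesn't make
    # the cut, so the "model" never sees that your name is Becca.
    print(build_context(chat, max_tokens=20))
```

The fix for that forgetfulness is persistent memory stored outside the window, which is exactly what OpenAI has been adding.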
But earlier this year, OpenAI introduced a more advanced memory system, and it’s become much more powerful. If you’re using the paid version of ChatGPT, like Plus or Pro, memory may already be switched on by default. That means ChatGPT can now remember things like your name, tone preferences, ongoing projects, and recurring goals. It can also use that context to personalize future chats, so your experience feels smoother and more relevant over time.

You can also check what it remembers. Just head to Settings > Personalization > Memory > Manage Memories to see, update, or delete anything it’s saved. You can also ask ChatGPT directly what it remembers, using a prompt like: “What do you remember about me?” – though it may not lay out all of its memories clearly. You can also request that it forget or update something. Or you can switch to the new Temporary Chat mode if you don’t want it to remember anything at all or draw from past conversations. This option is located in the top right-hand corner of the dashboard.

There are two key parts to memory. First, there are “saved memories”, which are things you or the system have explicitly stored, like “my name is Becca” or “I have a newsletter about the future.” Then there are “chat history” memories, which are patterns and preferences ChatGPT infers from previous conversations. Together, these give ChatGPT a more consistent and useful memory. Go to Settings > Personalization > Memory, where you can toggle both of these memory types on or off.

It’s helpful, but it’s not perfect

ChatGPT’s memory is still evolving, and it’s far from perfect. Just because both types of memory are switched on doesn’t mean it will remember everything or get it right every time. Experiments show that it can be hard to predict which details it retains from your chat history. It also won’t recall specific prompts unless they’re saved or repeated often. Unless you go back into a chat, it can’t access full transcripts either. And it won’t remember your workflow or creative process unless you intentionally teach it – even then, you may need to remind it now and again.

In other words, memory helps, but it doesn’t make ChatGPT omniscient. If you’re working on long-term projects or recurring tasks, it’s still worth re-sharing important context at the start of each session.

The reason ChatGPT doesn’t remember everything is partly that the tech isn’t there yet, but it’s also an attempt to give you some privacy – or at least the illusion of it. Memory in AI is a delicate balancing act. OpenAI wants ChatGPT to feel useful and indispensable – the more it remembers, the more helpful it’ll be and the less likely you’ll leave. But it also doesn’t want it to feel invasive or creepy. That’s why memory is limited and easy to access. It’s designed to make the system safer, more transparent, and more ethical. Or at least that’s what OpenAI says.

Teaching ChatGPT to remember

There are ways to work even more smoothly with ChatGPT’s memory. You can use “custom instructions” to set your preferred tone, format, or goals, even if memory is turned off. You’ll find these in Settings > Personalization > Custom instructions.

If you’re not sure whether ChatGPT will commit something important to memory, you can often simply tell it to. During a chat, you can type something like: “Remember, I like to write in US English.” It’ll usually respond acknowledging this preference, or an “Updated saved memory” tag will appear above its response.

You can summarize your context at the start of a session, too, and use project-based prompts to keep things on track. And if you’re not sure what ChatGPT remembers, you can always check your memory settings at any time, make edits, or clear the slate entirely.

ChatGPT’s memory is evolving fast. It’s learning how to build more useful continuity across time and conversations. But it’s not perfect. So, for now, think of ChatGPT like a helpful assistant who occasionally forgets stuff. It’s impressive in the moment, but it still pays to repeat the things that matter.
Samsung & Google’s Android XR Glasses Preview
After months of anticipation, Google has finally offered a glimpse into its next big leap in wearable tech – a prototype of Android XR smart glasses built in collaboration with Samsung. Unveiled at Google I/O, the live demonstration showcased not just the sleek potential of the device, but also the smart integration of artificial intelligence and real-time functionality, setting the stage for the next generation of smart eyewear.

At UXDLab, we’re always curious about where emerging technology is heading – especially when it intersects with user experience in everyday environments. And this early version of the Android XR glasses might just be a big step toward making augmented reality a truly useful part of daily life.

Smart Glasses That Don’t Look Like Tech

Unlike earlier bulky AR wearables, these smart glasses resemble a pair of conventional black frames. Although slightly thicker at the temples, the design doesn’t scream “gadget.” The internal tech is subtle – there’s a discreet screen embedded in the lens that displays essential information such as the time, weather, or notifications without obstructing the user’s view. When tested, tapping a button on the right stem triggered a photo capture, which momentarily expanded into view. This interaction felt intuitive – more immediate and immersive than the tap-to-capture feature on screenless devices like Meta’s Ray-Bans.

An AR Experience That Feels Natural

One of the most impressive elements was how natural the smart glasses felt during navigation demos. Rather than forcing users to look down at their phones or smartwatches, directions appeared seamlessly at the top of the field of vision. A Google rep also demonstrated how users could glance down slightly to access a mini moving map, which responded to head movements – creating a more integrated and less disruptive experience for navigation. This kind of heads-up experience highlights what makes wearable UX different: minimal intrusion, maximum contextual utility.

Powered by Gemini AI

What truly elevated the prototype was its integration with Gemini, Google’s conversational AI assistant. Much like what we’ve seen with Project Moohan – the mixed-reality headset also running Android XR – Gemini’s capabilities felt futuristic yet grounded in practical value. The assistant responded to questions with audio feedback, identified artworks, reviewed books, and provided shopping options, all via a voice-controlled, screen-assisted interface. The convergence of voice, vision, and context-aware interaction points to a powerful new paradigm in personal computing.

A Platform Approach with Broad Possibilities

While the current Samsung prototype may not win design awards just yet, the Android XR platform itself looks promising. Google is opening it up to other brands like Warby Parker, XREAL, and Gentle Monster, all of whom are expected to offer their own takes on smart glasses. This diversity could encourage innovation in both form and function – something the wearable tech space sorely needs. With developer access launching later this year, the Android XR ecosystem is about to get a lot more exciting. And as the platform matures, so too will the design, usability, and consumer accessibility of these devices.

Conclusion

At UXDLab, we believe the future of user experience lies in making technology feel invisible – seamlessly enhancing our world without pulling us out of it. The Android XR smart glasses prototype is a big step in that direction.
Whether it’s for navigation, capturing moments, translating languages, or accessing AI insights on the go, this emerging category could redefine how we interact with digital content. We’re keeping a close eye on what’s next – from Samsung’s final release to Meta’s and Apple’s responses. With powerful AI assistants like Gemini becoming wearable, the lines between the physical and digital worlds are blurring faster than ever – and the opportunities for innovative UX design have never been greater.

Source: TechRadar