Robotics works a lot like AI.
You need lots of high-quality data to make it work, except you can't just scrape the internet for robotics data; it requires real-world experience and physical variables.
There is no “Internet of robot actions.”
Tons of teams are working on humanoids and throwing stupid money at them, as they're the most obvious deca-trillion-dollar industry: at roughly $50k USD per unit, a humanoid will be cheaper than most human labour over its working life.
But the biggest race, just like in AI, is:
1. Getting quality data
2. Task training
Foundation models are like LLMs in AI, but instead of generating text, they generate actions for robots.
There are a couple of different approaches teams are taking to task training: some, like Figure, use small, high-fidelity, carefully labelled datasets, while others go spray-and-pray with massive models.
The goal is to give robots broad, pre-trained common sense and the ability to generalize across tasks and environments.
Instead of programming a robot for each task, you train a giant model on diverse data (videos of humans, simulations, real robot demos, images with text descriptions of tasks, etc.), and the model learns an embodied understanding of the physical world.
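To make the "diverse data" idea concrete, here's a toy sketch of how a training pipeline might mix heterogeneous sources into each batch. The source names and mixing weights are purely illustrative assumptions, not taken from any real pipeline:

```python
import random

# Hypothetical data sources and mixing weights for one training batch.
SOURCES = {
    "human_videos": 0.4,   # web-scale videos of people doing tasks
    "simulation":   0.3,   # synthetic rollouts from a physics sim
    "robot_demos":  0.2,   # teleoperated real-robot trajectories
    "image_text":   0.1,   # images paired with text task descriptions
}

def sample_batch(batch_size: int, seed: int = 0) -> dict:
    """Return how many examples each source contributes to one batch."""
    rng = random.Random(seed)
    names = list(SOURCES)
    weights = list(SOURCES.values())
    counts = {name: 0 for name in names}
    for _ in range(batch_size):
        counts[rng.choices(names, weights=weights)[0]] += 1
    return counts

print(sample_batch(256))  # rough split across the four sources
```

In a real system the weights themselves are a tuning knob: too much simulation and the model overfits to sim physics, too little and you starve it of scale.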
You can then prompt the robot to do something (through a command or example), and the foundation model’s “knowledge” kicks in to handle it, like how you can ask ChatGPT anything.
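The prompt-to-action loop described above can be sketched in a few lines. `VLAPolicy` and its skill table are hypothetical stand-ins, assumed for illustration; a real vision-language-action model fuses the image and instruction with a transformer rather than keyword-matching:

```python
import numpy as np

class VLAPolicy:
    """Toy stand-in for a vision-language-action foundation model:
    maps (camera image, text instruction) -> low-level action."""

    # Pretend the pre-trained model has "common sense" about a few skills,
    # each mapped to a 3-DoF end-effector delta (x, y, z).
    SKILLS = {
        "pick":  np.array([0.0, 0.0, -0.1]),  # move gripper down
        "place": np.array([0.0, 0.0,  0.1]),  # move gripper up
        "push":  np.array([0.1, 0.0,  0.0]),  # move gripper forward
    }

    def act(self, image: np.ndarray, instruction: str) -> np.ndarray:
        # A real model would condition on the image too; here we
        # just keyword-match the instruction against known skills.
        for skill, delta in self.SKILLS.items():
            if skill in instruction.lower():
                return delta
        return np.zeros(3)  # no-op for unrecognized instructions

policy = VLAPolicy()
frame = np.zeros((224, 224, 3))  # dummy camera frame
action = policy.act(frame, "Pick up the red cup")
print(action)  # 3-DoF end-effector delta
```

The point of the foundation-model approach is that the "table of skills" isn't hand-written like this: it's implicit in the weights, learned from the diverse data above, so novel instructions still resolve to sensible actions.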
So the big disconnect for a lot of these companies will be in the task-training area. They're currently deeply focused on the data side (world simulations, synthetic data, robot trajectories, human videos, etc.), since they need it for robots to interact reliably with the real world, but there's far less development on what the robots/humanoids can actually do.
Nvidia is leading one of the key foundation models (Isaac GR00T), which they've fully open-sourced. Third-party teams are already building on top of it and significantly improving its efficiency (one basically created a program for humanoids to clean up a room with minimal changes to the foundation model itself).
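The "build on top with minimal changes" pattern usually means freezing the foundation model's weights and training only a small task-specific head. This is a generic sketch of that idea, not the actual GR00T API; every shape and name here is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
BACKBONE_W = rng.normal(size=(64, 16))  # frozen pre-trained backbone weights

def backbone_features(obs: np.ndarray) -> np.ndarray:
    """Frozen foundation-model encoder: observation -> feature vector."""
    return np.tanh(obs @ BACKBONE_W)

# The only trainable part: a tiny linear head for the new task
# (e.g. "clean the room"), added by the third-party team.
head_W = np.zeros((16, 4))  # maps features to a 4-DoF action

def policy(obs: np.ndarray) -> np.ndarray:
    """New task policy = frozen backbone + small trainable head."""
    return backbone_features(obs) @ head_W

obs = rng.normal(size=64)
print(policy(obs).shape)  # (4,) -- all task behaviour lives in the head
```

The appeal for indie developers is the parameter count: the head is a tiny fraction of the model, so a new task can be trained cheaply while the expensive backbone is reused as-is.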
So the big overlap between crypto x AI x robotics will most likely lie in this task-training sector (think a robotics App Store): the leading foundation models are already going open source, and there will probably be large incentive models for indie developers to contribute and build programs/tasks for humanoids.
There's a lot of progression and mainstream development coming at the end of this year/early next year, where I think robotics will have its "ChatGPT" moment (Elon hard-shilling his new humanoid models, viral videos of humanoids doing real-world tasks, institutional money flowing in, workforces being laid off, etc.).
I can promise you I'm not wrong on this idea; it feels identical to AI in 2023. It's a matter of when, not if.
Don't ignore one of the most innovative technological progressions of our lifetime, and don't ignore $CODEC, which is the only available play sitting at the overlap of this trend.