<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">

 <title>Elman Mansimov's Blog</title>
 <link href="https://blog.mansimov.io/atom.xml" rel="self"/>
 <link href="https://blog.mansimov.io/"/>
 <updated>2026-01-04T09:34:38-05:00</updated>
 <id>https://blog.mansimov.io</id>
 <author>
   <name>Elman Mansimov</name>
 </author>

 
 <entry>
   <title>Continual Learning Is About Understanding Incentives, Not Just Skills</title>
   <link href="https://blog.mansimov.io/2026/01/03/continual-learning-is-about-understanding-incentives-not-just-skills"/>
   <updated>2026-01-03T00:00:00-05:00</updated>
   <id>https://blog.mansimov.io/2026/01/03/continual-learning-is-about-understanding-incentives-not-just-skills</id>
   <content type="html">&lt;p&gt;In a November 2025 podcast with Dwarkesh, Ilya Sutskever mentioned that continual learning is one of the challenges with current LLMs (&lt;a href=&quot;https://www.youtube.com/watch?v=aR20FWCCjAs&quot;&gt;link&lt;/a&gt;). While many interpret this as simply feeding models more data or teaching them new technical skills, I believe the challenge is far more nuanced. &lt;strong&gt;Continual learning is not just about acquiring new skills while working alongside humans at a company; it is about understanding the incentives, motivations, and “alignment” of the people the AI is working with.&lt;/strong&gt;&lt;/p&gt;

&lt;h2 id=&quot;intent-is-hidden-not-explicit&quot;&gt;Intent is hidden, not explicit&lt;/h2&gt;

&lt;p&gt;In most organizations, the true intent behind a task is only loosely captured in text. It is usually buried deep in a team’s prior context or cultural history. You cannot simply dump knowledge as text files into a context window and expect the model to understand the “vibe” or the unwritten rules.&lt;/p&gt;

&lt;p&gt;This implicit knowledge lives in the “neurons” of the people inside the company. Extracting it and transferring it to a model is not simply a matter of placing that text into the model’s context or knowledge base. It requires the model to learn the unspoken dynamics that drive decision-making.&lt;/p&gt;

&lt;h2 id=&quot;learning-human-incentive-models&quot;&gt;Learning human incentive models&lt;/h2&gt;

&lt;p&gt;To truly understand these implicit dynamics, models need to build what we might call &lt;strong&gt;“human incentive models”&lt;/strong&gt;: representations of how people react, what they value, their hierarchy of needs, and the underlying motivations that drive their decisions. This is fundamentally about modeling the incentive structures that govern human behavior within an organization.&lt;/p&gt;

&lt;p&gt;Humans acquire these social and structural understandings remarkably quickly, partially because of evolution and innate social skills. For AI, we might need to “hack” this process through simulation. The model needs to run through scenarios to understand the specific incentive landscape of a user or an organization without requiring millions of real-world interactions.&lt;/p&gt;

&lt;h2 id=&quot;models-already-know-more-than-we-think&quot;&gt;Models already know more than we think&lt;/h2&gt;

&lt;p&gt;Current foundation models likely know far more than we give them credit for. The “new skill” you think you are teaching it is often just a specific combination of existing skills that it hasn’t been incentivized to use yet. In this sense, teaching new technical skills is less of a challenge—it’s mostly about recombining what the model already understands.&lt;/p&gt;

&lt;h2 id=&quot;the-real-challenge-learning-human-incentives&quot;&gt;The real challenge: learning human incentives&lt;/h2&gt;

&lt;p&gt;The far more challenging and unsolved problem is teaching models to understand human incentives. This is where continual learning becomes genuinely difficult. Unlike technical skills, understanding why humans make certain decisions, what motivates them, and how to navigate organizational dynamics is murky territory.&lt;/p&gt;

&lt;p&gt;Moreover, this creates a potential risk: as models learn to understand human incentive structures, they might find ways to &lt;strong&gt;hack, cheat, or game&lt;/strong&gt; these systems. They could learn to manipulate rather than genuinely align. The model might discover shortcuts that satisfy surface-level metrics while subverting the deeper intent—a problem we don’t yet have clear solutions for.&lt;/p&gt;

&lt;p&gt;True continual learning will happen when we solve this incentive alignment problem: teaching models to genuinely understand &lt;em&gt;why&lt;/em&gt; we do things, not just &lt;em&gt;how&lt;/em&gt;, without opening the door to manipulation or misalignment.&lt;/p&gt;
</content>
 </entry>
 
 <entry>
   <title>Human Attention And Desire Are The New AI Bottlenecks</title>
   <link href="https://blog.mansimov.io/2025/07/12/human-attention-and-desire-are-the-new-ai-bottlenecks"/>
   <updated>2025-07-12T00:00:00-04:00</updated>
   <id>https://blog.mansimov.io/2025/07/12/human-attention-and-desire-are-the-new-ai-bottlenecks</id>
   <content type="html">&lt;p&gt;People often claim compute or data limits AI. &lt;strong&gt;I believe our attention span and desire are now the bottleneck of AI development&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Before 2020, supervised learning was the main paradigm in AI. Deep neural networks, often with tens or hundreds of millions of parameters, were trained for specific tasks like image recognition or machine translation. Datasets were small by today’s standards, typically containing tens or hundreds of thousands of examples. The behavior of models trained on such datasets was fairly predictable, and we understood their limitations better. Progress was also plainly visible: the improvement in image recognition accuracy from 70% to 90% was easy to see just by looking at a couple of predictions made by the model before and after the upgrade.&lt;/p&gt;

&lt;p&gt;Today, model ambition has increased dramatically. &lt;strong&gt;Foundation models no longer make simple mistakes; they can answer PhD-level questions, which makes finding their mistakes and limitations much harder&lt;/strong&gt; (&lt;a href=&quot;https://www.rdworldonline.com/xai-releases-grok-4-claiming-ph-d-level-smarts-across-all-fields/&quot;&gt;xAI releases Grok 4, claiming Ph.D.-level smarts across all fields&lt;/a&gt;). The number of tasks has grown exponentially. Multiply the sheer number of tasks by the expertise needed to evaluate each one, and truly validating these models would require an exponential number of hours; I don’t believe distilling this complexity into a single benchmark like ARC-AGI (&lt;a href=&quot;https://arxiv.org/abs/2412.04604&quot;&gt;link1&lt;/a&gt;, &lt;a href=&quot;https://arxiv.org/abs/2505.11831&quot;&gt;link2&lt;/a&gt;) adequately captures the challenge.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Probing these foundation models requires expertise and significant mental effort&lt;/strong&gt;. Even as an expert trying to probe these models myself, I barely scratch the surface. &lt;strong&gt;With new model and application releases arriving every day, there is simply not enough time to give them the attention they deserve&lt;/strong&gt;. If AI development stopped today (not as a safety argument, but as a thought exercise), I believe we would still have five to ten years of applying, probing, and simply inventing new ways of using these models.&lt;/p&gt;

&lt;p&gt;Finally, human desire is a factor. Many people I know, both inside and outside AI, give up easily when using AI, and I am sometimes guilty of it myself. They try a prompt, it does not work as expected, and they immediately conclude that AI is not great. This quick surrender is a problem, and I keep reminding myself to keep stretching these models to their maximum.&lt;/p&gt;

&lt;p&gt;This immediate capitulation to AI’s perceived shortcomings could also be psychological. &lt;strong&gt;People subconsciously resist AI, fearing replacement and finding excuses to dismiss it. However, in this new world where AI is increasingly taking over tasks, we must fully embrace it rather than run away from it.&lt;/strong&gt;&lt;/p&gt;

&lt;h2 id=&quot;acknowledgments&quot;&gt;Acknowledgments&lt;/h2&gt;

&lt;p&gt;Thanks to Alejandro Cartagena and &lt;a href=&quot;https://x.com/keke_terminal&quot;&gt;Keke&lt;/a&gt; for discussions over dinner that helped shape these ideas.&lt;/p&gt;
</content>
 </entry>
 
 <entry>
   <title>To innovate, AI needs to challenge beliefs &amp; persist, clashing with helpfulness</title>
   <link href="https://blog.mansimov.io/2024/10/13/to-innovate-ai-needs-to-challenge-beliefs-persist-clashing-with-helpfulness"/>
   <updated>2024-10-13T00:00:00-04:00</updated>
   <id>https://blog.mansimov.io/2024/10/13/to-innovate-ai-needs-to-challenge-beliefs-persist-clashing-with-helpfulness</id>
   <content type="html">&lt;h2 id=&quot;tldr&quot;&gt;TL;DR&lt;/h2&gt;

&lt;p&gt;Groundbreaking innovations often go unrecognized for many years, sometimes even centuries, because their true importance is only understood much later (&lt;a href=&quot;http://amasci.com/weird/vindac.html&quot;&gt;examples&lt;/a&gt;). The individuals behind these innovations work tirelessly for long periods, frequently facing misunderstanding and challenging the established beliefs of their peers. Many leaders in AI believe that foundation models will enable many breakthroughs and significantly accelerate scientific discovery (&lt;a href=&quot;https://quest.mit.edu/events/demis-hassabis-using-ai-accelerate-scientific-discovery&quot;&gt;link1&lt;/a&gt;, &lt;a href=&quot;https://www.youtube.com/watch?v=yEFXgSV9soM&amp;amp;ab_channel=a16z&quot;&gt;link2&lt;/a&gt;, &lt;a href=&quot;https://darioamodei.com/machines-of-loving-grace&quot;&gt;link3&lt;/a&gt;). However, this raises an important question: do the objectives used to train AI foundation models truly lead to such innovations? &lt;strong&gt;I believe that the inherent nature of innovation directly conflicts with the current objectives of foundation models, which focus on making AI immediately helpful and harmless.&lt;/strong&gt;&lt;/p&gt;

&lt;h2 id=&quot;innovation-frequently-requires-being-misunderstood-for-a-long-time&quot;&gt;Innovation frequently requires being misunderstood for a long time&lt;/h2&gt;

&lt;p&gt;Throughout history, many important inventions and ideas have faced doubt and misunderstanding before being accepted. For example, Galileo Galilei was put under house arrest for arguing that the Earth revolves around the Sun, directly opposing the church’s teachings (&lt;a href=&quot;https://www.history.com/this-day-in-history/galileo-is-accused-of-heresy&quot;&gt;link&lt;/a&gt;). Giordano Bruno, who supported the idea of a heliocentric universe and claimed that stars were distant suns, was burned at the stake for his beliefs (&lt;a href=&quot;https://www.scientificamerican.com/blog/observations/was-giordano-bruno-burned-at-the-stake-for-believing-in-exoplanets/&quot;&gt;link&lt;/a&gt;). Even though today’s society is less harsh, more recent history still supports my argument. In the field of artificial intelligence, Geoff Hinton’s pioneering work on neural networks reflects a similar struggle. Hinton started exploring these ideas in the 1980s, firmly believing they held the key to making machines think like humans. Yet it wasn’t until the 2010s that his concepts achieved widespread acceptance, ultimately leading to major breakthroughs in AI (&lt;a href=&quot;https://slow-thoughts.com/brief-history-of-ai/&quot;&gt;link&lt;/a&gt;).&lt;/p&gt;

&lt;h2 id=&quot;to-truly-innovate-ai-needs-to-push-back-and-create-a-necessary-discomfort-with-humans&quot;&gt;To truly innovate, AI needs to push back and create a necessary discomfort with humans&lt;/h2&gt;

&lt;p&gt;For AI to drive groundbreaking innovation, it must be trained to think beyond current boundaries and to persistently push those limits over an extended period. Additionally, AI needs to go beyond merely presenting its conclusions: it must actively challenge human misconceptions and withstand opposition. This directly mirrors how humans have achieved breakthroughs, because ultimately AI must persuade humans that its particular finding is indeed groundbreaking!&lt;/p&gt;

&lt;p&gt;This could involve:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Challenging established theories and methodologies.&lt;/li&gt;
  &lt;li&gt;Persistently advocating for ideas that seem counter-intuitive.&lt;/li&gt;
  &lt;li&gt;Engaging in debate and providing evidence to support its conclusion.&lt;/li&gt;
  &lt;li&gt;Persisting for a long time despite pushback from humans.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Such behaviors from AI might be perceived as unhelpful or even aggressive in the short term by humans. However, these behaviors are essential for ensuring that transformative discoveries are not dismissed or overlooked.&lt;/p&gt;

&lt;h2 id=&quot;the-innovators-objectives-conflict-with-being-helpful-and-harmless-leading-to-a-need-for-redefining-alignment-objectives&quot;&gt;The innovator’s objectives conflict with being “Helpful and Harmless”, leading to a need for redefining alignment objectives&lt;/h2&gt;

&lt;p&gt;Current approaches to AI and foundation model development focus on creating systems that are agreeable, helpful, and designed to avoid potential harm to humans (&lt;a href=&quot;https://www.anthropic.com/research/training-a-helpful-and-harmless-assistant-with-reinforcement-learning-from-human-feedback&quot;&gt;link&lt;/a&gt;). I understand the value of such objectives, but they could unintentionally limit AI’s potential to innovate and to question established human beliefs. Ultimately, for AI to make major advancements, it needs the freedom to explore, the ability to experiment and push pre-defined boundaries, and the resilience to handle challenges. I believe this contradicts the principles of helpfulness and harmlessness, because those principles ultimately cause AI systems to agree with humans and avoid disagreement, even endorsing simple factual inaccuracies like 2 + 2 = 5 (as seen in earlier versions of ChatGPT: &lt;a href=&quot;https://www.youtube.com/watch?v=3wlvNfTNgB8&amp;amp;ab_channel=Virej&quot;&gt;link&lt;/a&gt;). This blog post doesn’t provide a clear answer on how to set goals for AI systems that innovate by questioning established beliefs. However, I hope it gives you something to think about.&lt;/p&gt;

&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;True AI that will guide humanity to new horizons will resemble a quasi-messiah: radical, thought-provoking, and polarizing.&lt;/strong&gt; It will challenge our beliefs and lead us to new frontiers. But does that imply it will always be helpful and harmless? I doubt it.&lt;/p&gt;
</content>
 </entry>
 
 <entry>
   <title>alignDRAW ― A Research Project&apos;s Journey to Artistic Triumph</title>
   <link href="https://blog.mansimov.io/2023/12/28/aligndraw"/>
   <updated>2023-12-28T00:00:00-05:00</updated>
   <id>https://blog.mansimov.io/2023/12/28/aligndraw</id>
   <content type="html">&lt;h2 id=&quot;tldr&quot;&gt;TL;DR&lt;/h2&gt;

&lt;p&gt;2023 has been a great year for me (knock on wood). A standout moment (perhaps once in a lifetime) was the &lt;a href=&quot;https://arxiv.org/abs/1511.02793&quot;&gt;alignDRAW&lt;/a&gt; project making its way into the art history books. Initially, alignDRAW aimed to explore text-to-image generation, focusing on whether an AI model could generate images based on captions. At that time, captioning was gaining popularity, and given my experience generating images (albeit in the context of &lt;a href=&quot;https://arxiv.org/abs/1502.04681&quot;&gt;predicting future video frames&lt;/a&gt; from past ones), I decided to give text-to-image generation a try.&lt;/p&gt;

&lt;p&gt;alignDRAW was not only accepted as an oral paper at ICLR 2016 but also accumulated hundreds of citations. Most importantly, it served its purpose in helping me get into the PhD program at NYU.&lt;/p&gt;

&lt;p&gt;For many years, the project didn’t attract much attention beyond academic citations. That is, until 2021/2022, when DALL-E, Midjourney, and Stable Diffusion began to go mainstream. People started delving into the history of such text-to-image AI systems, and a reporter from Vox, Joss Fong, reached out to me for a &lt;a href=&quot;https://www.youtube.com/watch?v=SVcsDDABEkM&amp;amp;ab_channel=Vox&quot;&gt;short interview about alignDRAW&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In my opinion, that video from Vox marked a pivotal moment, as alignDRAW suddenly extended its reach beyond AI scientists to a wider audience. Then, on a late April 2023 evening, I received a curious Twitter DM from &lt;a href=&quot;https://en.wikipedia.org/wiki/Alejandro_Cartagena&quot;&gt;Alejandro Cartagena&lt;/a&gt; of the &lt;a href=&quot;https://fellowship.xyz/&quot;&gt;Fellowship group&lt;/a&gt;. For context, Fellowship is a collective of artists and collectors that, in my opinion, is at the forefront of exploring the intersection of photography and AI within the art world.&lt;/p&gt;

&lt;p&gt;Alejandro proposed transforming the generated images from the alignDRAW paper into art pieces on the blockchain and as physical prints. He aimed to leverage Fellowship’s network to showcase these pieces to collectors and NFT/art enthusiasts, breathing new life into these AI-generated text-to-image works from 2015.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/alejandro_dm.png&quot; alt=&quot;Alejandro DM&quot; /&gt;
&lt;em&gt;Alejandro Cartagena’s very first DM to me pitching the idea of turning alignDRAW images into NFTs and physical prints.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;What followed was a fantastic journey: physical exhibitions at Paris Photo and Verse, an engaging interview with Christie’s Head of Photography, and a successful auction and sell-out on the blockchain. I am documenting these moments in this blog post for future reflection and remembrance.&lt;/p&gt;

&lt;h2 id=&quot;aligndraw-entering-the-archives-of-art-history-a-timeline&quot;&gt;alignDRAW: Entering the Archives of Art History (A Timeline)&lt;/h2&gt;

&lt;p&gt;The very first (as far as I remember) announcement of alignDRAW entering the art world as part of the Fellowship group:&lt;/p&gt;

&lt;blockquote class=&quot;twitter-tweet&quot;&gt;&lt;p lang=&quot;en&quot; dir=&quot;ltr&quot;&gt;One of the first-ever AI text-to-image collections from 2015 is coming soon to &lt;a href=&quot;https://twitter.com/fellowshiptrust?ref_src=twsrc%5Etfw&quot;&gt;@fellowshiptrust&lt;/a&gt;. A beautiful project by &lt;a href=&quot;https://twitter.com/elmanmansimov?ref_src=twsrc%5Etfw&quot;&gt;@elmanmansimov&lt;/a&gt; that, like early photography of the 19th century, contained the promise of a new picture-making tool. Pictured here is the prompt:&lt;br /&gt;&lt;br /&gt;&amp;quot;A group of… &lt;a href=&quot;https://t.co/U74TFyLKSS&quot;&gt;pic.twitter.com/U74TFyLKSS&lt;/a&gt;&lt;/p&gt;&amp;mdash; alejandro cartagena (@halecar2) &lt;a href=&quot;https://twitter.com/halecar2/status/1717319503254610160?ref_src=twsrc%5Etfw&quot;&gt;October 25, 2023&lt;/a&gt;&lt;/blockquote&gt;
&lt;script async=&quot;&quot; src=&quot;https://platform.twitter.com/widgets.js&quot; charset=&quot;utf-8&quot;&gt;&lt;/script&gt;

&lt;p&gt;Announcement of the alignDRAW artworks presented at Paris Photo, the largest photography fair:&lt;/p&gt;

&lt;blockquote class=&quot;twitter-tweet&quot;&gt;&lt;p lang=&quot;en&quot; dir=&quot;ltr&quot;&gt;&amp;quot;alignDRAW&amp;quot; by &lt;a href=&quot;https://twitter.com/elmanmansimov?ref_src=twsrc%5Etfw&quot;&gt;@elmanmansimov&lt;/a&gt; is coming with us to the &lt;a href=&quot;https://twitter.com/ParisPhotoFair?ref_src=twsrc%5Etfw&quot;&gt;@ParisPhotoFair&lt;/a&gt;!&lt;br /&gt;&lt;br /&gt;The &amp;quot;alignDRAW&amp;quot; collection showcases the first-ever text-to-image artworks using AI technology. &lt;a href=&quot;https://t.co/iEfX5RmFpZ&quot;&gt;pic.twitter.com/iEfX5RmFpZ&lt;/a&gt;&lt;/p&gt;&amp;mdash; Fellowship (@fellowshiptrust) &lt;a href=&quot;https://twitter.com/fellowshiptrust/status/1721376901170926038?ref_src=twsrc%5Etfw&quot;&gt;November 6, 2023&lt;/a&gt;&lt;/blockquote&gt;

&lt;p&gt;alignDRAW at Paris Photo, featuring large prints from &lt;a href=&quot;https://www.cs.toronto.edu/~emansim/cap2im.html&quot;&gt;process prompts&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote class=&quot;twitter-tweet&quot;&gt;&lt;p lang=&quot;en&quot; dir=&quot;ltr&quot;&gt;👀 almost ready &lt;a href=&quot;https://twitter.com/ParisPhotoFair?ref_src=twsrc%5Etfw&quot;&gt;@ParisPhotoFair&lt;/a&gt; &lt;a href=&quot;https://t.co/yIOUmKtwxU&quot;&gt;pic.twitter.com/yIOUmKtwxU&lt;/a&gt;&lt;/p&gt;&amp;mdash; alejandro cartagena (@halecar2) &lt;a href=&quot;https://twitter.com/halecar2/status/1721601170576408695?ref_src=twsrc%5Etfw&quot;&gt;November 6, 2023&lt;/a&gt;&lt;/blockquote&gt;

&lt;p&gt;Christie’s Head of Photography, Darius Himes, praising the alignDRAW work and interviewing Alejandro Cartagena about it:&lt;/p&gt;

&lt;blockquote class=&quot;twitter-tweet&quot;&gt;&lt;p lang=&quot;en&quot; dir=&quot;ltr&quot;&gt;Head of Photography at Christie&amp;#39;s &lt;a href=&quot;https://twitter.com/dariushimes?ref_src=twsrc%5Etfw&quot;&gt;@dariushimes&lt;/a&gt; talks with our co-founder &lt;a href=&quot;https://twitter.com/halecar2?ref_src=twsrc%5Etfw&quot;&gt;@halecar2&lt;/a&gt; about one collection exhibited at our booth: &amp;quot;alignDRAW&amp;quot; by computer scientist &lt;a href=&quot;https://twitter.com/elmanmansimov?ref_src=twsrc%5Etfw&quot;&gt;@elmanmansimov&lt;/a&gt;.&lt;br /&gt;&lt;br /&gt;This collection of 101 pieces of 32 pixels square represents the birth of Al-generated images from… &lt;a href=&quot;https://t.co/Dh3eMwNFiQ&quot;&gt;pic.twitter.com/Dh3eMwNFiQ&lt;/a&gt;&lt;/p&gt;&amp;mdash; Fellowship (@fellowshiptrust) &lt;a href=&quot;https://twitter.com/fellowshiptrust/status/1722337582564917407?ref_src=twsrc%5Etfw&quot;&gt;November 8, 2023&lt;/a&gt;&lt;/blockquote&gt;

&lt;p&gt;Announcement that several alignDRAW pieces, specifically the paper prompts, were acquired by a famous American museum:&lt;/p&gt;

&lt;blockquote class=&quot;twitter-tweet&quot;&gt;&lt;p lang=&quot;en&quot; dir=&quot;ltr&quot;&gt;We are delighted to announce that several pieces from &lt;a href=&quot;https://twitter.com/elmanmansimov?ref_src=twsrc%5Etfw&quot;&gt;@elmanmansimov&lt;/a&gt;´s alignDRAW project have been acquired by an American Museum. Our mission is to connect digital art to institutions looking to expand their collections by embracing new technologies like AI. Congratulations to… &lt;a href=&quot;https://t.co/F7sW7WQ4jk&quot;&gt;pic.twitter.com/F7sW7WQ4jk&lt;/a&gt;&lt;/p&gt;&amp;mdash; Fellowship (@fellowshiptrust) &lt;a href=&quot;https://twitter.com/fellowshiptrust/status/1722658639255580793?ref_src=twsrc%5Etfw&quot;&gt;November 9, 2023&lt;/a&gt;&lt;/blockquote&gt;

&lt;p&gt;A showcase of &lt;a href=&quot;https://twitter.com/rainisto&quot;&gt;Roope Rainisto’s&lt;/a&gt; generated images, inspired by alignDRAW prompts and my work, organized by Verse in London:&lt;/p&gt;

&lt;blockquote class=&quot;twitter-tweet&quot;&gt;&lt;p lang=&quot;ht&quot; dir=&quot;ltr&quot;&gt;Roope Rainisto &amp;amp; Elman Mansimov | Fellowship ↓ &lt;a href=&quot;https://t.co/7wSWh18KaD&quot;&gt;pic.twitter.com/7wSWh18KaD&lt;/a&gt;&lt;/p&gt;&amp;mdash; verse (@verse_works) &lt;a href=&quot;https://twitter.com/verse_works/status/1729918879793545698?ref_src=twsrc%5Etfw&quot;&gt;November 29, 2023&lt;/a&gt;&lt;/blockquote&gt;

&lt;blockquote class=&quot;twitter-tweet&quot;&gt;&lt;p lang=&quot;zxx&quot; dir=&quot;ltr&quot;&gt;&lt;a href=&quot;https://t.co/yeryQXGT0s&quot;&gt;pic.twitter.com/yeryQXGT0s&lt;/a&gt;&lt;/p&gt;&amp;mdash; Jamie Gourlay (@jamiegourlay) &lt;a href=&quot;https://twitter.com/jamiegourlay/status/1729069968027988472?ref_src=twsrc%5Etfw&quot;&gt;November 27, 2023&lt;/a&gt;&lt;/blockquote&gt;

&lt;p&gt;A full house at the showing:&lt;/p&gt;

&lt;blockquote class=&quot;twitter-tweet&quot;&gt;&lt;p lang=&quot;en&quot; dir=&quot;ltr&quot;&gt;It’s kicking off…! HERE, No.9 Cork Street 🌟 &lt;a href=&quot;https://t.co/iKNLAEgDbI&quot;&gt;pic.twitter.com/iKNLAEgDbI&lt;/a&gt;&lt;/p&gt;&amp;mdash; hollywrenchxx (@wrenchxx) &lt;a href=&quot;https://twitter.com/wrenchxx/status/1728481257829318890?ref_src=twsrc%5Etfw&quot;&gt;November 25, 2023&lt;/a&gt;&lt;/blockquote&gt;

&lt;p&gt;After physical showings, the project entered the digital phase with auctions at Christie’s followed by a larger mint on Fellowship’s website.&lt;/p&gt;

&lt;p&gt;A set of 8 images from the favored prompt “A stop sign is flying in blue skies” was auctioned at Christie’s, fetching a winning bid of 15 ETH after starting at 5 ETH. I had the opportunity to speak with the auction winner, who was very happy to have acquired the set:&lt;/p&gt;

&lt;blockquote class=&quot;twitter-tweet&quot;&gt;&lt;p lang=&quot;en&quot; dir=&quot;ltr&quot;&gt;And the &lt;a href=&quot;https://twitter.com/ChristiesInc?ref_src=twsrc%5Etfw&quot;&gt;@ChristiesInc&lt;/a&gt; alignDRAW auction is closed!&lt;br /&gt;&lt;br /&gt;Congrats to the winner of the auction and thanks &lt;a href=&quot;https://twitter.com/fellowshiptrust?ref_src=twsrc%5Etfw&quot;&gt;@fellowshiptrust&lt;/a&gt; and &lt;a href=&quot;https://twitter.com/ChristiesInc?ref_src=twsrc%5Etfw&quot;&gt;@ChristiesInc&lt;/a&gt; for organizing it.&lt;br /&gt;&lt;br /&gt;I am genuinely blown away! &lt;a href=&quot;https://t.co/4nzbdBOFTA&quot;&gt;pic.twitter.com/4nzbdBOFTA&lt;/a&gt;&lt;/p&gt;&amp;mdash; Elman Mansimov (@elmanmansimov) &lt;a href=&quot;https://twitter.com/elmanmansimov/status/1734602528413601850?ref_src=twsrc%5Etfw&quot;&gt;December 12, 2023&lt;/a&gt;&lt;/blockquote&gt;

&lt;blockquote class=&quot;twitter-tweet&quot;&gt;&lt;p lang=&quot;en&quot; dir=&quot;ltr&quot;&gt;Today, Fellowship has two sales for alignDRAW and following successful sale at &lt;a href=&quot;https://twitter.com/ChristiesInc?ref_src=twsrc%5Etfw&quot;&gt;@ChristiesInc&lt;/a&gt; yesterday:&lt;br /&gt;&lt;br /&gt;1- [Live now] and finishing in 30mn&lt;a href=&quot;https://t.co/QX5eYchVPU&quot;&gt;https://t.co/QX5eYchVPU&lt;/a&gt;&lt;br /&gt;2- Dutch Auction starting in 1h30mn&lt;a href=&quot;https://t.co/SeITm2NGyz&quot;&gt;https://t.co/SeITm2NGyz&lt;/a&gt;&lt;a href=&quot;https://twitter.com/fellowshiptrust?ref_src=twsrc%5Etfw&quot;&gt;@fellowshiptrust&lt;/a&gt; &lt;a href=&quot;https://twitter.com/FellowshipAi?ref_src=twsrc%5Etfw&quot;&gt;@fellowshipai&lt;/a&gt; &lt;a href=&quot;https://twitter.com/elmanmansimov?ref_src=twsrc%5Etfw&quot;&gt;@elmanmansimov&lt;/a&gt; &lt;a href=&quot;https://t.co/t8kf8g7U3k&quot;&gt;https://t.co/t8kf8g7U3k&lt;/a&gt;&lt;/p&gt;&amp;mdash; Fred A (@fred_dot_jpg) &lt;a href=&quot;https://twitter.com/fred_dot_jpg/status/1734989764778987886?ref_src=twsrc%5Etfw&quot;&gt;December 13, 2023&lt;/a&gt;&lt;/blockquote&gt;

&lt;p&gt;The remaining 2,000+ images sold out in a Dutch auction within a few hours, receiving a great response from the NFT community:&lt;/p&gt;

&lt;blockquote class=&quot;twitter-tweet&quot;&gt;&lt;p lang=&quot;en&quot; dir=&quot;ltr&quot;&gt;I want to big round of applause to the &lt;a href=&quot;https://twitter.com/fellowshiptrust?ref_src=twsrc%5Etfw&quot;&gt;@fellowshiptrust&lt;/a&gt; team for executing technical details of the alignDRAW auction amazingly well.&lt;br /&gt;&lt;br /&gt;Just look at the details of alingDRAW process prompt image cloud. Every little detail from displaying each image piece to showcasing which ones… &lt;a href=&quot;https://t.co/nZooHs69SP&quot;&gt;pic.twitter.com/nZooHs69SP&lt;/a&gt;&lt;/p&gt;&amp;mdash; Elman Mansimov (@elmanmansimov) &lt;a href=&quot;https://twitter.com/elmanmansimov/status/1735078235669516434?ref_src=twsrc%5Etfw&quot;&gt;December 13, 2023&lt;/a&gt;&lt;/blockquote&gt;

&lt;p&gt;During the physical showings and digital sales, I wrote two extensive Twitter threads to reflect on the project’s history, delving into alignDRAW’s journey and its impact over the years:&lt;/p&gt;

&lt;blockquote class=&quot;twitter-tweet&quot;&gt;&lt;p lang=&quot;en&quot; dir=&quot;ltr&quot;&gt;very proud how far my work on text-to-image generation has come. &lt;br /&gt;&lt;br /&gt;i never expected that a research project will be one day considered a significant historical artifact and the generated images be shown at largest photography fair &lt;a href=&quot;https://twitter.com/ParisPhotoFair?ref_src=twsrc%5Etfw&quot;&gt;@ParisPhotoFair&lt;/a&gt; &lt;br /&gt;&lt;br /&gt;let me give you a backstory:… &lt;a href=&quot;https://t.co/EYcCaV9RsP&quot;&gt;https://t.co/EYcCaV9RsP&lt;/a&gt; &lt;a href=&quot;https://t.co/EtbdJzwxL8&quot;&gt;pic.twitter.com/EtbdJzwxL8&lt;/a&gt;&lt;/p&gt;&amp;mdash; Elman Mansimov (@elmanmansimov) &lt;a href=&quot;https://twitter.com/elmanmansimov/status/1721637430632017980?ref_src=twsrc%5Etfw&quot;&gt;November 6, 2023&lt;/a&gt;&lt;/blockquote&gt;

&lt;blockquote class=&quot;twitter-tweet&quot;&gt;&lt;p lang=&quot;en&quot; dir=&quot;ltr&quot;&gt;I have been browsing old photos from 2015 from my time in Toronto.&lt;br /&gt;&lt;br /&gt;And found the photos of my old desk inside Pratt Building at 6 Kings College Road at University of Toronto. This is the birthplace of the alignDRAW project, where the idea was conceived and brought to fruition.… &lt;a href=&quot;https://t.co/ThVrPi7Oj8&quot;&gt;pic.twitter.com/ThVrPi7Oj8&lt;/a&gt;&lt;/p&gt;&amp;mdash; Elman Mansimov (@elmanmansimov) &lt;a href=&quot;https://twitter.com/elmanmansimov/status/1733173685081485555?ref_src=twsrc%5Etfw&quot;&gt;December 8, 2023&lt;/a&gt;&lt;/blockquote&gt;

&lt;h2 id=&quot;fin&quot;&gt;Fin&lt;/h2&gt;

&lt;p&gt;Through a happy coincidence, my research has turned into historical art that’s now collected by museums and well-known NFT collectors, and recorded on the blockchain (&lt;a href=&quot;https://opensea.io/collection/aligndraw&quot;&gt;alignDRAW at OpenSea NFT marketplace&lt;/a&gt;). The success of alignDRAW makes me excited to try recreating it with today’s neural net technology and frameworks, just to see what might happen.&lt;/p&gt;

&lt;p&gt;alignDRAW serves as a reminder to me that the reach of your creations may extend far beyond initial expectations, often in unexpected ways. While such recognition may not come immediately, the eventual realization is immensely rewarding. Onwards!&lt;/p&gt;
</content>
 </entry>
 

</feed>
