<?xml version="1.0"?>
<rss version="2.0"><channel><title>Madsen-Art</title><link>https://www.throneofgeeks.com/blogs/blog/16-madsen-art/</link><description/><language>en</language><item><title>Suno AI Song Syntax</title><link>https://www.throneofgeeks.com/blogs/entry/316-suno-ai-song-syntax/</link><description><![CDATA[<p>Suno uses Chirp, a generative model, to generate songs. We must create documents that prompt the model correctly. To do this, we use specific tags to break the songs up into sections that the model understands. </p><h3><strong>Base Tags</strong></h3><p>This list represents all of the valid base tags. Each tag can be modified and used in several ways, but in general these are the only valid tags.</p><p>- [Intro]<br>- [Hook]<br>- [Pre-Chorus]<br>- [Chorus]<br>- [Verse]<br>- [Interlude]<br>- [Break]<br>- [Movement]<br>- [Instrumental]<br>- [Solo]<br>- [Build]<br>- [Bridge]<br>- [Outro]<br>- [End]<br>- [&lt;vocals&gt;] (clarification later)<br>- [&lt;specific instrument&gt;] (clarification later)</p><h4>[Intro]</h4><p>The intro tag should generally only be used at the beginning and is strictly instrumental. It can be modified with several adjectives. For instance:</p><p>- [Long Mellow Intro]<br>- [Short Exciting Intro]<br>- [Dreamy Slow Intro]</p><p>As you can see from the examples, you can generally add an emotive and/or a pacing adjective. The system doesn't always honor the intention, but it tends to work best if you use very direct, concrete adjectives that are salient to musical construction (speed, emotion, and intensity seem to work best). Modifiers are not strictly necessary, but can be useful for establishing the mood early on.</p><h4>[Hook]</h4><p>Generally not necessary unless you modify it; it is more or less treated like an intro. 
It can be used to transition from the intro to the main part of the song, particularly if the intro is different.</p><h4>[Pre-Chorus]</h4><p>This is a strictly vocal tag that is often used at the beginning of songs to introduce the story or narrative. It may or may not be sung (it could be spoken; this can be specified). This tag should generally only be used once, or before chorus tags.</p><p>- [Haunting Whispered Pre-Chorus]<br>- [Staticky Spoken Pre-Chorus]<br>- [Primal Scream Pre-Chorus]<br>- [Opera Female Pre-Chorus]</p><p>These modifiers, while not strictly required, can confer very specific feels. As with the intro, emotive, intensity, and pacing adjectives tend to work well, with the added option of singing styles, gender, and so on. </p><h4>[Chorus]</h4><p>This is about what you'd expect for any song construction. Generally speaking, the Chirp system decides how to render the chorus, so modifiers are often NOT honored, for whatever reason. The documentation says that the chosen style and lyrics tend to do more to modify how the chorus is sung. However, very concrete modifiers are most likely to be honored. This tag is one of the prime workhorses.</p><p>- [Whispered Chorus]<br>- [Eerie Chorus]<br>- [Ensemble Chorus]<br>- [Slow Chorus]</p><p>Lyric construction also has a huge impact on how the chorus is sung, and is often more important once the construction has been set up correctly. The system seems to ignore capitalization, but the vibe of the lyrics has an impact. Punctuation seems to have a larger impact:</p><p>- Ellipsis... this tends to make the system approach it more slowly, particularly if it's used... multiple times... in the line...<br>- Exclamation! this doesn't often have a huge impact but it can tell the system to emphasize a line<br>- Oooooohhh whoaaa ahhhh! vocalizations generally work extremely well to amp up a chorus (the system will not render non-word vocalizations without explicitly being told)<br>- mmmmmmmmmmm oh... 
gentler vocalizations will have a similar dampening effect<br>- (parenthesis lyrics) these work really well, and tend to produce call-and-response or antiphonal effects</p><h4>[Verse]</h4><p>This tag is the other primary workhorse and is used pretty much identically to [Chorus]. It is not strictly necessary to modify it, and in many cases the system will decide how to modify it (consider that in most of its training data, chorus and verse are left plain). So again, the content of the lyrics tends to do more. The system seems particularly sensitive to where in the song the verse falls, meaning that if the generator decides a crescendo is more likely than a key change, it will make that judgment call. However, emotive, intensity, and pace modifiers tend to work. </p><p>- [Angry Verse]<br>- [Mysterious Verse]<br>- [Whispered Verse]<br>- [Spoken Verse]<br>- [Opera Verse]</p><p>These concrete modifiers are most likely to be honored. The same lyrical modifiers from the chorus also apply:</p><p>- Ellipsis...<br>- Exclamation!<br>- Vocalizaaaaaaaaaations<br>- (Parenthesis)</p><p>For example:</p><p>\```<br>I am the void between stars<br>(Beyond the veil of forms...)<br>I am the death of light<br>(Where your deepest terrors remain...)<br>\```</p><h4>[Interlude]</h4><p>This is one of the main workhorses of the instrumental tags. It's pretty much what you'd expect. Many modifiers don't seem to impact this tag, but a few tend to be more reliable.</p><p>- [Melodic Interlude] this one is pretty reliable, so similar modifiers should work.<br>- [Long Melancholy Interlude] this works about half the time<br>- [Short Accelerating Interlude] the system tends to prefer short instrumentals anyway</p><p>Don't get too creative with the modifier tags. For instance, genre-specific modifiers like <code>[Psychedelic Interlude]</code> don't really seem to work; even though the tag makes sense to us, the system doesn't seem to recognize it. 
However, there is another way to modify all instrumental sections: using periods and exclamation marks to shape the pacing.</p><p>\```<br>[Melodic Interlude]<br>. . . ! . .<br>. ! . . . !<br>\```</p><p>\```<br>[Intense Interlude]<br>!! . ! !! !<br>!! !! ! !!<br>\```</p><p>And so on. You can arrange the . and ! in any way you like to convey the rhythm. </p><h4>[Break]</h4><p>Break is strictly instrumental, often defaults to one measure or phrase, and can be used quite frequently. Almost no modifiers work here, and it has the most impact on the song when wedged between verses and choruses. What does tend to work is specifying the instrument to lead during the break:</p><p>- [Violin Break]<br>- [Drum Break]<br>- [Scream Break]<br>- [Lead Guitar Break]<br>- [Bass Guitar Break]</p><p>So basically you can use it like a small solo. The rhythm modifiers are generally totally ignored here. </p><h4>[Movement]</h4><p>This is an experimental tag, but <em>might</em> help the engine transition to a new movement. </p><p>- [Begin Psychedelic Movement]<br>- [Transition to Faster Harder Movement]<br>- [Long Orchestral Movement]</p><p>The system is liable to totally ignore this tag, but it's worth a shot. </p><h4>[Instrumental]</h4><p>General-purpose tag to break up a song. It can be used on its own, unmodified, and is often used in conjunction with other tags or modifiers (see the rest of this doc).</p><h4>[Solo]</h4><p>This tag is pretty much exactly what you'd expect. It pairs well with [Interlude] and does best when you specify the instrument, pace, and energy. </p><p>- [Soaring Lead Guitar Solo]<br>- [Fast and Intense Drum Solo]<br>- [Dancing Fiddle Solo]<br>- [Playful Flute Solo]<br>- [Finger Style Guitar Solo]</p><p>As you can see, this kind of pattern tends to do best with the solo. It all generally comes back to instrument, pace, emotion/energy. 
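</p>

<p>
Tags, lyric lines, and the dot-and-exclamation rhythm notation are all just lines of text, so a song document can be assembled programmatically. Below is a minimal sketch; the helper names are my own illustration, not part of Suno:
</p>

```python
def rhythm_line(accents):
    """Render one line of the instrumental rhythm notation:
    '!' marks an accented beat, '.' an unaccented one."""
    return " ".join("!" if a else "." for a in accents)

def section(tag, *lines):
    """Render one tagged section, e.g. [Melodic Interlude] plus its body."""
    return "\n".join([f"[{tag}]", *lines])

print(section("Melodic Interlude",
              rhythm_line([0, 0, 0, 1, 0, 0]),
              rhythm_line([0, 1, 0, 0, 0, 1])))
```

<p>
Joining such sections with blank lines produces the full lyrics document to paste into Suno.
</p>

<p>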
One thing to note is that chaining [Interlude] and [Solo] is often the best way to change the movement or overall tone of a song. The exclamation and period modifiers sometimes work on solos, but the system often just goes with the vibe. </p><h4>[Build]</h4><p>This tag is often less effective than interlude or solo, and is often treated like a break. Period and exclamation rhythms seem to have relatively little effect. It can work well when sandwiched between a verse and a chorus (e.g. to build to a soaring chorus), but that can be redundant. This should only be used when it really makes sense for the song, and probably only once. In most cases, a break, solo, or interlude should be used instead; that's how specific this use case is. </p><h4>[Bridge]</h4><p>The system doesn't seem to know what to do with bridges. It often just treats them like a verse or a chorus, sometimes a refrain. Use them sparingly, at most once per song. </p><p>- [Instrumental Bridge] - this seems to have the most impact or be most useful</p><h4>[Outro]</h4><p>This tag works whether it's instrumental or vocal, and can be treated pretty flexibly. Its primary purpose is to tell the system to start preparing for the end of the song. </p><p>- [Long Fading Outro]<br>- [Urgent Loud Outro]<br>- [Mournful Outro]</p><p>Like many such tags, emotion + pace seems to work well. This tag should only be used once, near the end, to cue the generator to start winding the song down. </p><h4>[End]</h4><p>As you'd expect, this generally tells the system to end the song.</p><p>- [Fade to End]<br>- [Lingering End]<br>- [End Resolves to Whispers]</p><p>You can play with this tag some, but generally it just serves as a standalone tag. Some of the same modifiers here can work for the outro tag as well. 
</p><h3><strong>Vocal Tags</strong></h3><p>Generally speaking, the style of the music (specified separately; see Styles below) dictates the voice, which is generated automatically. However, in songs where the vocals change significantly, the system will honor vocal tags. </p><p>- [Spoken Word Narration]<br>- [Telephone Call]<br>- [Female Opera Singer]<br>- [Swanky Crooning Male]<br>- [Ethereal Female Whisper]</p><p>These tags can be used in lieu of verse or chorus tags, and there can be a lot of flexibility, as these sorts of tags appear in the training data and significantly modify how the song is delivered. </p><p>\```<br>[Spoken Word Narration]<br>*static* ...final log... coordinates unknown...<br>...oxygen critical... systems failing...<br>...tell earth we made it... we saw such beautiful things...<br>...orion spur expedition... signing off... *static*<br>\```</p><h3><strong>Instrument Tags</strong></h3><p>You can do somewhat the same with specific instruments. These can serve in lieu of a solo or as part of one.</p><p>\```<br>[Sad Trombone]<br>waah-Waaah-WAAH<br>\```</p><p>\```<br>[Chugging Guitar]<br>chuka-chuka-chuka-chuka<br>\```</p><p>\```<br>[Overblown Flute]<br>\```</p><p>\```<br>[Trilling Pennywhistle]<br>\```</p><h4><strong>Simple Example</strong></h4><p>This is a pretty minimalist example which worked really well. The system will fill in a lot of gaps, so you really don't need much. Less is often more, particularly if the STYLE is well defined. 
</p><p>\```<br>[Verse]<br>Sun beats down hard dry road<br>Dust devils dance shadows long<br>Heat waves twist in gold<br>Mirages fade now gone</p><p>[Chorus]<br>Lost in the wasteland void<br>Echoes of time destroy<br>Lost in the desert sand<br>Seeking the promised land</p><p>[Verse 2]<br>Cactus stands alone silent guard<br>Hawks circling overhead far<br>Bleached bones in the arid yard<br>Searching for a falling star</p><p>[Bridge]<br>Time drips slow never ends<br>Mind’s eye bends and bends<br>Vultures fly high in the sky<br>Dreams of rain make me cry</p><p>[Chorus]<br>Lost in the wasteland void<br>Echoes of time destroy<br>Lost in the desert sand<br>Seeking the promised land</p><p>[Verse 3]<br>Night falls cool winds rise<br>Stars blaze across the skies<br>Desert whispers truth and lies<br>In the silence spirit flies<br>\```</p><h4><strong>Intermediate Example</strong></h4><p>Below is an example of the above song but with a bit more control over the flow. </p><p>\```<br>[Long Instrumental Intro]</p><p>[Verse]<br>Sun beats down hard dry road<br>Dust devils dance shadows long<br>Heat waves twist in gold<br>Mirages fade now gone</p><p>[Chorus]<br>Lost in the wasteland void<br>Echoes of time destroy<br>Lost in the desert sand<br>Seeking the promised land</p><p>[Lead Guitar Solo]</p><p>[Verse 2]<br>Cactus stands alone silent guard<br>Hawks circling overhead far<br>Bleached bones in the arid yard<br>Searching for a falling star</p><p>[Bridge]<br>Time drips slow never ends<br>Mind’s eye bends and bends<br>Vultures fly high in the sky<br>Dreams of rain make me cry</p><p>[Build]</p><p>[Ensemble Chorus]<br>Lost in the wasteland void<br>Echoes of time destroy<br>Lost in the desert sand<br>Seeking the promised land</p><p>[Melancholy Outro]</p><p>[Verse 3]<br>Night falls cool winds rise<br>Stars blaze across the skies<br>Desert whispers truth and lies<br>In the silence spirit flies</p><p>[Fade to End]<br>\```</p><h4><strong>Complex Example</strong></h4><p>Below is one of the most 
sophisticated songs that worked well, including multiple movements. </p><p>\```<br>[intro]<br>. . . ! . .<br>. . ! . . .</p><p>[build]<br>. ! . . ! .<br>! . ! . ! !<br>! ! . ! ! !</p><p>[verse]<br>engines burning bright and strong<br>breaking free from earthly bonds<br>through the atmosphere we climb<br>leaving all we knew behind</p><p>[break]</p><p>[chorus]<br>beyond the orion spur<br>where no one's gone before<br>beyond the orion spur<br>ten thousand worlds explore</p><p>[break]<br>. . . ! . .<br>. ! . . ! .</p><p>[verse]<br>hyperdrive ignition flows<br>new stars glowing as we go<br>ancient light guides us here<br>through the void without fear</p><p>[interlude]<br>. ! . . . !<br>. . . ! . .</p><p>[verse]<br>cosmic winds carry us far<br>past the light of dying stars<br>through the gates of space and time<br>leaving known space far behind</p><p>[solo]<br>! . . ! . .<br>! . ! . ! !</p><p>[bridge]<br>warning lights begin to flash<br>systems failing coming crash<br>alien world draws us near<br>atmosphere of cosmic fear</p><p>[break]<br>! ! . . ! !<br>! . ! ! . !</p><p>[verse]<br>toxic clouds below our wings<br>alien horrors this world brings<br>must escape this deadly sphere<br>but our engines disappear</p><p>[break]<br>. . . ! . .<br>. . . . ! .</p><p>[chorus]<br>drifting through the starlit deep<br>further than our maps can reach<br>signals fading into night<br>earth has vanished from our sight</p><p>[solo]<br>. . ! . . .<br>. . . ! . .</p><p>[verse]<br>oxygen running so low<br>our final moment to know<br>that we flew too far too fast<br>beyond where our fate was cast</p><p>[break]<br>. . . ! . .<br>. ! . . . .</p><p>[chorus]<br>beyond the orion spur...<br>where no return is sure...<br>beyond the orion spur...<br>forever we endure...</p><p>[spoken word narration]<br>*static* ...final log... coordinates unknown...<br>...oxygen critical... systems failing...<br>...tell earth we made it... we saw such beautiful things...<br>...orion spur expedition... signing off... 
<em>static</em></p><p>[beeping carrier signal]<br>. . . !<br>. . !<br>. !<br>.</p><p>[slow fade]<br>. . .<br>. .<br>.</p><p>[fade to end]<br>\```</p><h3>Styles</h3><p>Styles are limited to 120 characters total and should not be included in the song; the style is crafted separately.</p><p>The system accepts a separate STYLE tag that is a simple comma-separated list of genres and modifiers. Interestingly, commas are not necessary, and you can get some really interesting hybrid styles without them. Here's one of my most successful examples:</p><p>- stoner space rock shoegaze slow build epic crescendos psychedelic riffing soaring solos pensive interludes long intro</p><p>However, the system tends to work better with commas separating the distinct genres and modifiers:</p><p>- space rock, stoner rock, slow build, epic crescendos, psychedelic riffing, soaring solos, pensive interludes, shoegaze</p><p>It should be noted that both of these are slightly outside of best practices, as they include modifiers for solos and interludes, which can be specified inside the song itself. </p><p>- space rock, psychedelic rock, desert rock, stoner rock, shoegaze</p><p>Simply creating a list of genres tends to work extremely well, almost like a taxonomy. In this case, space rock provides the most influence, with each subsequent genre having less and less influence. </p><p>- witchpop, electro swing, eerie<br>- witchpop, house, hypnotic, dreamy, eerie<br>- Acoustic, Desert, Nubidian, Acoustic nu-metal,<br>- Hurdy-gurdy, dark, scary, otherworldly</p><p>You can also focus on emotive modifiers. 
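</p>

<p>
Because the style field is capped at 120 characters, it can be worth validating a comma-separated style list before pasting it in. A small sketch (the helper is hypothetical; only the 120-character limit comes from Suno):
</p>

```python
def build_style(*terms, limit=120):
    """Join style terms into a comma-separated style string and
    enforce the 120-character cap on the style field."""
    style = ", ".join(t.strip() for t in terms)
    if len(style) > limit:
        raise ValueError(f"style is {len(style)} chars; limit is {limit}")
    return style

style = build_style("space rock", "psychedelic rock", "desert rock",
                    "stoner rock", "shoegaze")
print(style)  # space rock, psychedelic rock, desert rock, stoner rock, shoegaze
```

<p>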
These tend to work better.</p><p>- witchpop, witchrock, folk, violin, acoustic, eerie, mysterious, clean vocals, classically trained<br>- neofolk, celtic, dance, celebratory, orchestral<br>- Electronic, sweet female voice, eerie, swing, dreamy, melodic, electro, sad, emotional<br>- Folkmetal, Folk, Metal, Hurdy-Gurdy, Hand-Organ, Gaelic Woman, Female, Beautiful, Top 40, English Lyrics<br>- New Age, Celtic, Slow, Celtic Harp, Piano, Flute, ethereal female vocals, atmospheric<br>- Medieval Folk, Neofolk, Pagan Folk, German Folk, European folk, neoclassical music, ethereal music, darkwave, Folk Dance (sounds a lot like Faun or Celtic Woman)</p><p>So in general, the things that work best when constructing styles are:</p><p>- Genre(s) - one or more genres in sequence<br>- Emotions - one or more "vibes" to go with them<br>- Instruments - violins, handpans, orchestras, etc. (particularly if they're not obvious from or guaranteed by the genre)<br>- Vocal styles - opera, growling, etc.</p>
	AI image generation, often referred to as generative modeling, involves using artificial intelligence algorithms, particularly deep learning techniques, to create images that mimic real-world data or produce entirely new imagery. One of the most popular and powerful techniques for AI image generation is Generative Adversarial Networks (GANs), proposed by Ian Goodfellow and his colleagues in 2014.
</p>

<p>
	Here’s a simplified explanation of how GANs work:
</p>

<p>
	<strong>Generator Network</strong>: This is the AI component responsible for creating images. It takes random noise as input and progressively transforms it into images that become increasingly similar to the training data.
</p>

<p>
	<strong>Discriminator Network</strong>: This acts as the “critic” or “judge” in the process. It evaluates images produced by the generator and attempts to distinguish them from real images from the training dataset.
</p>

<p>
	<strong>Training Process</strong>: Initially, the generator produces random images, and the discriminator makes guesses about whether they are real or fake. Based on the feedback from the discriminator, the generator adjusts its parameters to produce more realistic images. At the same time, the discriminator is also trained to improve its ability to distinguish real from fake images.
</p>

<p>
	<strong>Adversarial Nature</strong>: The name “Generative Adversarial Network” comes from the adversarial relationship between the generator and the discriminator. As the generator improves, the discriminator must also improve to maintain its ability to distinguish real and fake images. This competition drives both networks to improve over time.
</p>

<p>
	<strong>Convergence</strong>: Ideally, the training process continues until the generator produces images that are indistinguishable from real ones, and the discriminator can no longer differentiate between real and fake images.
</p>
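<p>
	The adversarial loop above can be sketched concretely with a toy one-dimensional GAN. Everything here is an illustrative assumption (a linear generator, a logistic-regression discriminator, and scalar "data" drawn from a Gaussian instead of images), but the alternating update structure mirrors the real training process:
</p>

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "real data": scalars drawn from N(4, 1) instead of images.
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: x_fake = w_g * z + b_g (z is random noise).
# Discriminator: p(real) = sigmoid(w_d * x + b_d).
w_g, b_g = 1.0, 0.0
w_d, b_d = 0.0, 0.0
lr, batch = 0.05, 64

for step in range(2000):
    z = rng.normal(0.0, 1.0, batch)
    x_real = real_batch(batch)
    x_fake = w_g * z + b_g

    # Discriminator update: push p(real) toward 1 and p(fake) toward 0.
    p_real = sigmoid(w_d * x_real + b_d)
    p_fake = sigmoid(w_d * x_fake + b_d)
    w_d -= lr * (np.mean((p_real - 1) * x_real) + np.mean(p_fake * x_fake))
    b_d -= lr * (np.mean(p_real - 1) + np.mean(p_fake))

    # Generator update (non-saturating loss): push p(fake) toward 1.
    p_fake = sigmoid(w_d * x_fake + b_d)
    grad_x = (p_fake - 1) * w_d
    w_g -= lr * np.mean(grad_x * z)
    b_g -= lr * np.mean(grad_x)

# After training, the generator's output mean b_g has drifted toward the
# real data mean, even though the generator never saw a real sample directly.
print(w_g, b_g)
```

<p>
	In a real image GAN both networks are deep convolutional models and the updates are computed by backpropagation, but the push-and-pull between the two players is the same.
</p>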

<p>
	Other approaches for AI image generation include Variational Autoencoders (VAEs), which learn a probabilistic distribution of the input images and can generate new images by sampling from this distribution. Auto-regressive models like PixelCNN and PixelRNN generate images one pixel at a time, conditioning each pixel on previously generated pixels.
</p>

<p>
	These techniques have found applications in various domains, including art generation, image editing, data augmentation, and even in generating realistic synthetic images for training other AI models.
</p>
]]></description><guid isPermaLink="false">300</guid><pubDate>Tue, 29 Oct 2024 18:22:15 +0000</pubDate></item><item><title>How to Increase WordPress Memory Limit</title><link>https://www.throneofgeeks.com/blogs/entry/299-how-to-increase-wordpress-memory-limit/</link><description><![CDATA[<p>
	Below, we’ll walk through how to increase the WordPress memory limit.
</p>

<p>
	Unfortunately, we can’t guarantee that this will work for you, because it depends in large part on how your hosting environment is configured.
</p>

<p>
	Because some of these fixes involve editing core WordPress files, we highly recommend that you take a backup of your site before proceeding.
</p>

<p>
	Once you have a recent backup of your site, here are some ways to increase the WP memory limit.
</p>

<h2>
	Edit wp-config.php and increase WP_MEMORY_LIMIT
</h2>

<p>
	If you’re running low on memory, there’s a simple solution: increase the amount of memory you have access to! WordPress lets you set the current memory limit in its wp-config.php file via the WP_MEMORY_LIMIT constant. However, this limit may be less than the amount of memory provided with your hosting plan.
</p>

<p>
	If this is the case, you may be able to resolve this error by editing your wp-config.php file. Making changes at the code level does carry a degree of risk, so it’s smart to create a backup of your site before proceeding.
</p>

<p>
	To edit the wp-config.php file, you’ll need to connect to your server via File Transfer Protocol (FTP) using an FTP client such as FileZilla.
</p>

<p>
	Once connected, navigate to your site’s root folder.
</p>

<p>
	Inside this folder, right-click on the wp-config.php file, and select View/Edit. This opens wp-config.php in your default text editor. Now, search for the following phrase – WP_MEMORY_LIMIT.
</p>

<p>
	It might look something like this:
</p>

<pre class="ipsCode prettyprint lang-html prettyprinted" id="ips_uid_253_8" style=""><span class="pln">define( 'WP_MEMORY_LIMIT', '32M' );</span></pre>

<p>
	If this code already exists in your wp-config.php file, you need to increase the number. For example, you can increase it from 32M to 256M.
</p>

<p>
	If you don’t see this line of code in the file, you’ll need to add it. Just add the following code above the line that says /* That’s all, stop editing! Happy publishing. */:
</p>

<pre class="ipsCode prettyprint lang-html prettyprinted" id="ips_uid_253_10" style=""><span class="pln">define( 'WP_MEMORY_LIMIT', '256M' );</span></pre>
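<p>
	If you’d rather script the change than edit the file by hand, the same edit can be automated. The sketch below is illustrative only (the helper name and the regular expression are my own, not part of WordPress; it operates on the file’s text, which you would still fetch and re-upload over FTP):
</p>

```python
import re

def bump_memory_limit(config_text, new_limit="256M"):
    """Return wp-config.php contents with WP_MEMORY_LIMIT set to new_limit.
    If the define() already exists it is rewritten in place; otherwise it is
    inserted above the "That's all, stop editing!" marker."""
    define = f"define( 'WP_MEMORY_LIMIT', '{new_limit}' );"
    pattern = r"define\(\s*'WP_MEMORY_LIMIT'\s*,\s*'[^']*'\s*\);"
    if re.search(pattern, config_text):
        return re.sub(pattern, define, config_text)
    marker = "/* That's all, stop editing! Happy publishing. */"
    return config_text.replace(marker, define + "\n" + marker)

print(bump_memory_limit("define( 'WP_MEMORY_LIMIT', '32M' );"))
# prints define( 'WP_MEMORY_LIMIT', '256M' );
```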

<p>
	There you go. There are many more ways to do this but I just wanted to share the method that worked for me.
</p>
]]></description><guid isPermaLink="false">299</guid><pubDate>Tue, 29 Oct 2024 17:58:14 +0000</pubDate></item><item><title>20 Best Stable Diffusion Prompts for Age</title><link>https://www.throneofgeeks.com/blogs/entry/298-20-best-stable-diffusion-prompts-for-age/</link><description><![CDATA[<h2>
	Note
</h2>

<p>
	Some prompts have not been tested extensively and might produce inconsistent results. This tutorial was put together with the help of the Reddit <abbr title="Stable Diffusion is a deep learning, text-to-image model released in 2022 based on diffusion techniques. It is considered to be a part of the ongoing AI Spring. It is primarily used to generate detailed images conditioned on text descriptions, though it can also be applied to other tasks such as inpainting, outpainting, and generating image-to-image translations guided by a text prompt.">Stable Diffusion</abbr> community and shows how they used age prompts in Stable Diffusion. Make sure to tweak some parameters accordingly.
</p>

<h2>
	Step 1: Identify Your Subject
</h2>

<p>
	Begin your age specification by determining who the subject of your project is.
</p>

<p>
	The individual can be absolutely anyone from celebrities, family members, or even characters from your favorite books.
</p>

<p>
	Be sure to have a clear mental image of your subject. The initial prompt is your starting point; it’s your opportunity to establish the foundational idea you want the Stable Diffusion model to build upon.
</p>

<h2>
	Step 2: Specifying the Age Range
</h2>

<p>
	Once you have your subject in mind, the next step is to articulate the age range that you wish to represent.
</p>

<p>
	This is a crucial step because age significantly influences how we perceive individuals.
</p>

<p>
	In your prompt, you’ll need to use “age XX” where XX represents the lower limit of the desired age range. This could be anything from 10, 20, 30, etc., based on your preference.
</p>

<p>
	For example:
</p>

<ul>
	<li>
		“newborn” for &lt; 3 yrs
	</li>
	<li>
		“child” for &lt; 10 yrs
	</li>
	<li>
		“teen” to reinforce “age 10”
	</li>
	<li>
		“college age” for upper “age 15” range into low “age 22” range
	</li>
	<li>
		“youthful adult” reinforces “age 25” range into middle “age 35” range
	</li>
	<li>
		“middle age” for upper “age 40” range into lower “age 60” range
	</li>
	<li>
		“grandmother/grandfather” for “age 55” on up
	</li>
</ul>
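<p>
	If you generate prompts programmatically, the list above can be turned into a small lookup. This sketch is just one reading of those overlapping ranges (the helper name and exact cutoffs are my own):
</p>

```python
AGE_TERMS = [
    ("newborn", 0, 2),
    ("child", 3, 9),
    ("teen", 10, 14),
    ("college age", 15, 22),
    ("youthful adult", 25, 35),
    ("middle age", 40, 60),
    ("grandmother/grandfather", 55, 200),
]

def age_terms(age):
    """Return every descriptor whose rough range covers the given age."""
    return [term for term, lo, hi in AGE_TERMS if lo <= age <= hi]

print(age_terms(57))  # ['middle age', 'grandmother/grandfather']
```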

<p>
	The Stable Diffusion model will then use these terms to align the generated images with your defined age range.
</p>

<p>
	Another way to influence the age range of your results is by employing a negative prompt, essentially telling the model what you don’t want to see.
</p>

<p>
	This could include similar age-related terms that you want to exclude from your results, further tightening your age range.
</p>

<p>
	On the other hand, you can also use the age: parameter, followed by the desired age, to define the age you want your character to look in Stable Diffusion, and more often than not it works wonderfully. For example, age:52 or age:30.
</p>
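<p>
	Putting the pieces of this step together, a prompt with an age parameter and a negative prompt can be composed like this (the helper and the example terms are illustrative, not part of Stable Diffusion itself):
</p>

```python
def compose_prompt(subject, age, extras=(), negatives=()):
    """Build a Stable Diffusion prompt using the age:NN parameter,
    plus a separate negative prompt of age-related terms to exclude."""
    prompt = ", ".join([subject, f"age:{age}", *extras])
    negative = ", ".join(negatives)
    return prompt, negative

prompt, negative = compose_prompt(
    "portrait of a woman", 30,
    extras=["high-resolution", "detailed"],
    negatives=["elderly", "middle age"],
)
print(prompt)    # portrait of a woman, age:30, high-resolution, detailed
print(negative)  # elderly, middle age
```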

<h2>
	Step 3: Using Specific Year References
</h2>

<p>
	Sometimes, specifying an age range might not be enough, especially when dealing with public figures whose appearances have evolved over time.
</p>

<p>
	To address this, consider using specific year references in your prompt, such as “in the year 1995”.
</p>

<h2>
	Step 4: Refining with Additional Details
</h2>

<p>
	If your subject tends to appear older than desired, you can use further specifications to bring their age down.
</p>

<p>
	For instance, the term (teen:1.3) in your prompt combined with terms like (child, toddler, infant, cherub) in your negative prompt can tilt the results towards a younger representation.
</p>

<p>
	Also, to exclude facial hair that could potentially age your subject, terms like (beard, mustache, stubble:1.2) can be added to your negative prompt.
</p>

<h2>
	Step 5: Experiment
</h2>

<p>
	Machine learning and AI are built upon iterative refinement and experimentation. Don’t be afraid to play around with different combinations of age specifications, descriptors, negative prompts, and year references.
</p>

<p>
	Sometimes the most effective results come from an unexpected combination of factors.
</p>

<h2>
	Best Prompts to Try Out in Stable Diffusion for Specifying Age
</h2>

<p>
	Remember, these prompts are just samples to show how we used age parameters in Stable Diffusion and got close results.
</p>

<ul>
	<li>
		“Generate an image of Taylor Swift, age:18, with a vibrant country-style outfit, inspired by pop art, high definition, sharp lines, colorful palette.”
	</li>
	<li>
		“Render an image of Keanu Reeves, age: 25 years old, donning a chic black suit, in a noir-inspired monochrome art style, high-resolution, with clear and well-defined features.”
	</li>
	<li>
		“Create an anime-styled representation of Harry Potter at age:11 years old, detailed and vibrant, inspired by Studio Ghibli’s art style, sharp, clean lines, high-resolution.”
	</li>
	<li>
		“Visualize a 25-year-old Audrey Hepburn in a Roman Holiday style outfit, classic monochrome Hollywood aesthetic, detailed and refined art, high-resolution.”
	</li>
	<li>
		“Generate an image of a 30 year old Steve Jobs, dressed in his iconic black turtleneck and jeans, in a modern minimalist style, sharp and clear lines, high resolution.”
	</li>
	<li>
		“Create a full-body portrayal of a jubilant Ana de Armas at age:28, detailed anime realism, trending on Pixiv, minute detailing, sharp and clean lines, award-winning illustration, 4K resolution.”
	</li>
	<li>
		“Visualize Albert Einstein at age 50 in a pop art style with bold and vibrant colors, detailed and high-resolution image.”
	</li>
	<li>
		“Generate a portrait of a 35-year-old Leonardo DiCaprio, inspired by Vincent Van Gogh’s impressionist style, vibrant color usage, intricately detailed, high-resolution.”
	</li>
	<li>
		“Render a full-body image of a 20-year-old Serena Williams, in action, capturing the energy of her game, detailed realism, high-resolution, vibrant color palette.”
	</li>
	<li>
		“Generate an image of a 10-year-old Hermione Granger, in an anime style inspired by ‘My Hero Academia’, vibrant and colorful, high-resolution.”
	</li>
</ul>

<p>
	That’s it, you’re now ready to use age parameters in your <abbr title="Stable Diffusion is a deep learning, text-to-image model released in 2022 based on diffusion techniques. It is considered to be a part of the ongoing AI Spring. It is primarily used to generate detailed images conditioned on text descriptions, though it can also be applied to other tasks such as inpainting, outpainting, and generating image-to-image translations guided by a text prompt.">Stable Diffusion</abbr> prompts. But as we mentioned, some versions of Stable Diffusion might not support these parameters.
</p>

<p>
	I would love it if you could share your tips in the comments below and help the community.
</p>
]]></description><guid isPermaLink="false">298</guid><pubDate>Tue, 29 Oct 2024 17:54:28 +0000</pubDate></item></channel></rss>
