Doing Perchance.org things…

  • Please see this generator to see the effect of each piece of ‘text’ in the prompt.

    I’ve separated the text into sections to show the effect of adding each piece to the generated image. All images use the same seed, so they start from (almost) the same noise. With or without your provided photo (on my device), the result was still the same.

    One major effect on the generated image comes from adding the facial hair text to the prompt.

    The more interesting thing is that I could also adjust its weight, to 1.5, and the effect would be even more significant.

    Yes, if you increase the ‘emphasis’ of a tag, the AI will increase the significance of that text in the image. See Tag Emphasis - Prompting Guide for a demo, and the short syntax example at the end of this comment.

    Based on your response to me before, it seems the image actually changes and poisons the image pool.

    If you use famous people’s names (Trump, Elvis, etc.), the generated images won’t be as accurate/realistic. The reason might be to prevent ‘deep fakes’ that could be used to impersonate people or falsify information (this is not confirmed, since we don’t know the specific model used for the images or the dataset used to train it).

    Can you also provide the prompt of the generated image? Click the ℹ on the top left to open the prompt and negative prompt of the generated image like so:
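
    As a quick illustration of the emphasis syntax (the tags here are made up for the example, but the (text:weight) form is the same one used in the 1.3-weighted prompt in my other reply):

    a man, portrait photo, (facial hair:1.5), detailed face

    The 1.5 tells the model to weight “facial hair” more heavily than an unweighted tag, which is why raising the number makes the effect more pronounced.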


  • It might just be a coincidence that the prompt gives that particular image with that file path. As I said, if the page were actually able to open the file, it would have added cat features regardless of the ‘a man referred to’ starting prompt, since it would have inferred that the referenced image was a cat. I would also suggest not posting any personal info, for privacy.

    Also, it doesn’t train on the spot. It is already trained, and it just outputs based on what it was trained on. And as you said, you added the a man referred to ... to determine the following traits of char1: facial structure, head shape, proportions, colors, detail, moles, hair style, facial hair, eye shape, health, hair line text to emphasize that it was a man, and with that text it is more likely that the images will lean into it.


  • It is just added as text in the prompt. Here, I’m using an image of a kitten as the file path input. The file path was /storage/emulated/0/DCIM/Camera/IMG_20231219_100845102.jpg

    And as you can see by clicking the (i) button on the top left, you can open the prompt that was used to generate the image.
    Of course the image is different since the prompt text is different, and the images also differ because of the seed. But it should have had cat features, or at least cat-related imagery, if the file path pointing to a cat image were actually being read. So it cannot open the image and infer anything from it; it is only text-to-image, not image-to-image.

    Here is the result with the same file path that you have, /internal storage/dcim/perchance/Me.jpg, which doesn’t exist on my device:

    Looking at your prompt:

    , ,a man referred to as "char1". (char1 is the person in the local image "/internal storage/dcim/perchance/Me.jpg": 1.3). reference "/internal storage/dcim/perchance/Me.jpg" to determine the following traits of char1: facial structure, head shape, proportions, colors, detail, moles, hair style, facial hair, eye shape, health, hair line

    Since your prompt contains the words facial structure, head shape, proportions, colors, detail, moles, hair style, facial hair, eye shape, health, hair line, the AI will try to generate an image that matches those words.

    With the text (char1 is the person in the local image "/internal storage/dcim/perchance/Me.jpg": 1.3), that whole phrase gets an increased emphasis of 1.3 in the generated image. So if you have a file path with samus in it, the AI would strongly emphasize those words, meaning they are much more likely to show up in the image.

    The file path is just treated as a text input.
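
    To illustrate the idea (a purely hypothetical sketch, not the plugin’s actual code): if the generator simply splices whatever you typed into its prompt string, the path is nothing more than extra characters in that string:

    // Hypothetical sketch (not the plugin's real code): the "reference" text,
    // file path included, is just concatenated into the final prompt string.
    const restOfPrompt = 'a man referred to as "char1"';
    const referenceText = 'reference "/internal storage/dcim/perchance/Me.jpg" to determine the traits of char1';
    const prompt = restOfPrompt + ". " + referenceText;
    // No file is opened anywhere; the model only ever sees the path as ordinary text.
    console.log(prompt);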





  • I’ve tried testing it with [console.log(com), update(), console.log(com), ''] upon first load of the page, and update() essentially removes the com variable since it cannot update the comments instance.

    Based on the note on the comments-plugin page:

    Note: By default comments areas aren’t “updated”/randomized when the user clicks the “randomize” button. Instead the comments area will “stay there”. If you actually want to display a random channel each time the user clicks a button, or you e.g. want to put a different comments box in each “room” within your goto-plugin adventure, then you need to add replacedDuringUpdate=true like in this example generator. That ensures that the old comments section is “deleted” and replaced by a fresh one every time the page is updated/randomized.

    With the replacedDuringUpdate=true option, you can then update the page and com will still be accessible, because the comments instance is recreated rather than dropped. Otherwise, you need to assign the comments instance to com again after every update.
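
    Roughly, the setup would look something like this (a sketch only; the exact option name and placement are whatever the example generator linked in that note uses, and com and the channel name here are just placeholders). In the lists panel:

    commentsPlugin = {import:comments-plugin}

    and in the HTML panel:

    [com = commentsPlugin({channel: "my-generator-comments", replacedDuringUpdate: true}), '']

    That way the comments area is rebuilt on every update(), and com is re-assigned to the fresh instance each time instead of pointing at one that has already been deleted.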





  • You could possibly use the API that retrieves a generator’s stats for the generator cards, though you need to make the function async.

    async function loadUserDataset() { // Changed to an async function so we can wait for the data to be retrieved.
      // ...
      if (userDataset != "") {
        let result = await fetch("/api/getGeneratorStats?names=" + userDataset).then(r => r.json()); // Gets the generator stats
        let gens = result.data; // Returns an array of generators
        if (gens.length > 0) { // If there is data
          document.userDataset = userDataset;
        } else { // If not
          alert('Perchance Page Not Found');
          document.userDataset = 'template-fusion-generator-dataset'; // or any default page
        }
      }
      // ...
      update(); // to refresh the display, or you can use element.innerHTML = <the new value>
    }
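
    If you go the element.innerHTML route instead of calling update(), a rough sketch would be something like this (assuming a container element with id "cards", and that each entry in gens has a name field; the actual response shape may differ):

    // Rebuild the generator cards from the fetched stats (the id and field name here are placeholders)
    document.getElementById("cards").innerHTML =
      gens.map(g => `<div class="card">${g.name}</div>`).join("");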
    



  • You are using document.getElementById(...).click() on the button that calls openTab, but at that point openTab is not yet defined, because the function declaration comes after the click() call, which is why it throws that error. You just need to move the click() call below the declaration of the function. I would also avoid using this as the id of an element, since this has a different meaning in JavaScript.

    <script>
      function openTab(evt, tabName){
        var i, tabcontent, tablinks;
        
        tabcontent = document.getElementsByClassName("content");
        for (i = 0; i < tabcontent.length; i++) {
          tabcontent[i].style.display = "none";
        }
        
        tablinks = document.getElementsByClassName("topbtn");
        for (i = 0; i < tablinks.length; i++) {
          tablinks[i].className = tablinks[i].className.replace(" active","");
        }
        document.getElementById(tabName).style.display = "block";
        evt.currentTarget.className += " active";
        
      }
      
      document.getElementById("TabOneBtn").click(); // click after declaring the function, changed the id of the button from 'this' to 'TabOneBtn'
    </script>
    

    Another possible way to set your default is to simply give the default tab the active class (and a visible display) directly in the HTML, so there is no need to click the button at all.
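
    For instance (hypothetical markup: the class names and the openTab(event, ...) handler match the script above, while the ids and labels are placeholders):

    <!-- Mark the default tab as active and visible up front instead of simulating a click. -->
    <button class="topbtn active" onclick="openTab(event, 'TabOne')">Tab One</button>
    <button class="topbtn" onclick="openTab(event, 'TabTwo')">Tab Two</button>

    <div id="TabOne" class="content" style="display: block;">Default tab content</div>
    <div id="TabTwo" class="content" style="display: none;">Second tab content</div>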