Prithiv Sakthi PRO
prithivMLmods's activity
Demos for the respective trials:
- prithivMLmods/FLUX-REALISM
- prithivMLmods/FLUX-ANIME
Models:
- prithivMLmods/Canopus-LoRA-Flux-FaceRealism
- prithivMLmods/Canopus-LoRA-Flux-Anime
Datasets:
- prithivMLmods/Canopus-Realism-Minimalist
- https://4kwallpapers.com
I think a concrete example would help HF deal with this. (I hope you don't mean me...)
Hi @John6666, I didn't mean you at all. I was referring to people posting 'Not Safe for Work' content / links attached to Spaces. [They even countered by arguing that it was a Hugging Face feature.]
I frequently see random people duplicating top-trending spaces and promoting illegal ads and unethical activities with links to sites. It would be beneficial to restrict these actions by issuing warnings when they attempt to commit or upload files. [ PS: I still come across it. ]
Additionally, implementing chat support within Hugging Face would be valuable. This feature could provide knowledge and guidance for those who are just starting to build today, helping them navigate the platform and use its tools effectively.
[ Activity Overview ] for Users.
GrabDoc V | GrabDoc | Type Byte | SD3 CLI
- GrabDoc V: prithivMLmods/GRABDOC-V
- GrabDoc: prithivMLmods/GRAB-DOC
- Type Byte: prithivMLmods/Type-Byte
- SD3 CLI: prithivMLmods/SD3-CLI
Space: prithivMLmods/STABLE-IMAGINE
Each LoRA in the space requires its appropriate trigger words to give good results.
Articles: https://huggingface.co/blog/prithivMLmods/lora-adp-01
**Description and Utility Functions**
- Most likely image generation
- Most accurate trigger words expected
- Each designed to capture different artistic elements
- Specialized styles and characteristics
- Flexible to design what is needed (keyword-centric)
- Increasing productivity
Repository: https://github.com/prithivsakthiur/gen-vision
Colab link: https://colab.research.google.com/drive/1axA0pU--32t4a8AHiVlt6zyl8gRfiXKs
Notebook: prithivMLmods/STABLE-IMAGINE
lora_options = {
    "Realism (face/character)": ("prithivMLmods/Canopus-Realism-LoRA", "Canopus-Realism-LoRA.safetensors", "rlms"),
    "Pixar (art/toons)": ("prithivMLmods/Canopus-Pixar-Art", "Canopus-Pixar-Art.safetensors", "pixar"),
    "Photoshoot (camera/film)": ("prithivMLmods/Canopus-Photo-Shoot-Mini-LoRA", "Canopus-Photo-Shoot-Mini-LoRA.safetensors", "photo"),
    "Clothing (hoodies/pant/shirts)": ("prithivMLmods/Canopus-Clothing-Adp-LoRA", "Canopus-Dress-Clothing-LoRA.safetensors", "clth"),
}
# ...
# Load each LoRA adapter into the pipeline, then move the pipeline to the GPU
for model_name, weight_name, adapter_name in lora_options.values():
    pipe.load_lora_weights(model_name, weight_name=weight_name, adapter_name=adapter_name)
pipe.to("cuda")
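Assuming (this is my reading of the space, not confirmed by the post) that the third tuple element doubles as both the adapter name and the trigger word, a hypothetical helper could build the final prompt for a chosen style; a sketch, not the space's actual code:

```python
# Subset of the lora_options above; the third field is assumed to serve as
# both the adapter name and the trigger word (an assumption for illustration).
lora_options = {
    "Realism (face/character)": ("prithivMLmods/Canopus-Realism-LoRA",
                                 "Canopus-Realism-LoRA.safetensors", "rlms"),
    "Pixar (art/toons)": ("prithivMLmods/Canopus-Pixar-Art",
                          "Canopus-Pixar-Art.safetensors", "pixar"),
}

def build_prompt(style, user_prompt):
    """Return (prompt with trigger word prepended, adapter name to activate)."""
    _repo, _weight, trigger = lora_options[style]
    return f"{trigger}, {user_prompt}", trigger

prompt, adapter = build_prompt("Pixar (art/toons)", "a boy with an umbrella")
# With a loaded pipeline, one would then do something like:
#   pipe.set_adapters(adapter)
#   image = pipe(prompt).images[0]
```

This keeps the style selection, adapter switching, and trigger-word handling in one place instead of hardcoding them per prompt.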
Hi @AleksPokd, I will work on the idea after completing my ongoing work.
Thank you for the idea, we might collab!
Space: prithivMLmods/IMAGINEO-4K
Tried the Duotone Canvas with the image generator. Unlike the duotone filter in the Canva app, which applies hue and tints in RGBA, this feature applies duotones based purely on the provided prompt to personalize the generated image.
These tones also work with the gridding option that already exists in the space.
The application of tones depends on the quality and detail of the given prompt; the palette may be distorted in some cases.
It is not applied as a hue or tint in RGBA (as in the Canva app); it is based purely on the prompts passed.
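For contrast, a classic RGBA-style duotone of the kind the Canva app applies (which the space deliberately avoids) maps each pixel's luminance between a shadow color and a highlight color. A minimal PIL sketch, with the two colors chosen arbitrarily for illustration:

```python
from PIL import Image

def classic_duotone(img, shadow=(20, 20, 60), highlight=(255, 200, 80)):
    """Map each pixel's luminance between a shadow and a highlight color."""
    gray = img.convert("L")  # luminance channel
    # Build a 256-entry lookup table per RGB channel,
    # linearly interpolating shadow -> highlight.
    lut = []
    for channel in range(3):
        lut.extend(
            shadow[channel] + (highlight[channel] - shadow[channel]) * i // 255
            for i in range(256)
        )
    return gray.convert("RGB").point(lut)

# Pure black maps to the shadow color, pure white to the highlight color.
img = Image.new("RGB", (2, 1))
img.putpixel((0, 0), (0, 0, 0))
img.putpixel((1, 0), (255, 255, 255))
out = classic_duotone(img)
```

The prompt-driven approach in the space skips this pixel-level mapping entirely and bakes the tones in at generation time.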
Check out the space: prithivMLmods/IMAGINEO-4K
Collection: https://huggingface.co/collections/prithivMLmods/collection-zero-65e48a7dd8212873836ceca2
huggingface.co/spaces/prithivMLmods/IMAGINEO-4K
What you can do with this space:
- Compose image grids
  - "2x1", "1x2", "2x2", "2x3", "3x2", "1x1"
- Apply styles
- Set up image tones
- Apply filters & adjust quality
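The grid layouts above can be sketched with PIL. A hypothetical helper (not the space's actual code) that pastes equally sized images into a rows x cols grid, left to right, top to bottom:

```python
from PIL import Image

def compose_grid(images, rows, cols):
    """Paste equally sized images into a rows x cols grid."""
    w, h = images[0].size
    grid = Image.new("RGB", (cols * w, rows * h))
    for idx, img in enumerate(images[: rows * cols]):
        grid.paste(img, ((idx % cols) * w, (idx // cols) * h))
    return grid

# "2x2": four 64x64 tiles -> one 128x128 image
tiles = [Image.new("RGB", (64, 64), color)
         for color in ("red", "green", "blue", "white")]
grid = compose_grid(tiles, rows=2, cols=2)
```

The same helper covers all the listed layouts ("2x1", "1x2", "2x3", ...) by changing `rows` and `cols`.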
.
.
.
Thanks for reading!
- @prithivMLmods
Hi @AleksPokd,
Can you describe this fully: "it would be fantastic to consider adding the Image Reference or Character Reference feature"? That would help me move further. Do you mean text-to-image, or image-to-image?
It is derived from the base models SDXL 1.0, JuggernautXL, and Epic Realism, with auto-labeling algorithms (wd-v1-4-vit-tagger-v2 and wd-v1-4-convnext-tagger-v2); alternatively, you can do the batch labeling manually using Automatic1111 on RunPod, Tensor Art, and more. There is no specific training for diffusers.
Regarding this question: Is it possible to extract the text model to apply the deltas to the Mistral model? Did they use a Mistral base, since it has 32 layers (31 transformer layers plus the LM head)?
Transferring fine-tuned results from one model to another can be done, but I have no idea how to extract the text model and apply the deltas to the Mistral model.
If you have any research about it, please share.
What Lol??
Hhoo!! Okay man...
Have a Great Day !!
Hmm, commercially you were right; different approaches arise from different strategies. The approach of using more advanced, cost-effective cloud solutions for deployment aligns well with many development practices. It's essential to balance performance needs with cost considerations to optimize your AI model deployment strategy. Let the environment decide!
PS: @LeroyDyer, please remove the **Not-For-All-Audiences** tag from your recent models if they don't really belong to NFAA / NSFW, because they may not reach the right people for future enhancements. If they really belong there, leave it as it is.
Colab has always been great, no complaints.
Live Space: prithivMLmods/YOLO-VIDEO (duplicate the Space to avoid queuing issues).
T4 Colab: https://colab.research.google.com/drive/1BKgFUfk2Me1cSPFmbtZSVCn_4cYImPO-?au
For HPC, use A100/T4 under controlled conditions.
Speed estimation, object counting, distance calculation, workout monitoring, heatmaps, etc.
Ultralytics dropped YOLOv8 in the #Ultralytics 8.2.51 release. YOLOv8 is designed to be fast, accurate, and easy to use, making it an excellent choice for a wide range of object detection and tracking, instance segmentation, image classification, and pose estimation tasks.
https://pypi.org/project/ultralytics/8.2.58/
More features you can try:
- Classes selection support added
- Live FPS display in the sidebar
- Webcam and video support added
- Confidence and NMS threshold options to modify
- Segmentation, detection, and pose model support added
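To illustrate what the confidence and NMS threshold options control, here is a minimal pure-Python sketch of the filtering step (not Ultralytics' actual implementation): boxes below the confidence threshold are dropped, then lower-scoring boxes that overlap a kept box above the IoU threshold are suppressed.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def nms(detections, conf_thres=0.25, iou_thres=0.45):
    """detections: list of (box, score); returns the kept detections."""
    # Drop low-confidence boxes, then process highest score first.
    dets = sorted((d for d in detections if d[1] >= conf_thres),
                  key=lambda d: d[1], reverse=True)
    kept = []
    for box, score in dets:
        if all(iou(box, k[0]) < iou_thres for k in kept):
            kept.append((box, score))
    return kept

dets = [((0, 0, 10, 10), 0.9),    # kept: highest score
        ((1, 1, 11, 11), 0.8),    # suppressed: heavy overlap with the first
        ((50, 50, 60, 60), 0.7),  # kept: no overlap
        ((0, 0, 10, 10), 0.1)]    # dropped: below confidence threshold
```

Raising `conf_thres` trades recall for precision; raising `iou_thres` lets more overlapping boxes survive.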
Ultralytics live inference: https://docs.ultralytics.com/guides/streamlit-live-inference/
from ultralytics import solutions
solutions.inference()
### Make sure to run the file using the command `streamlit run <file-name.py>`
Or via the CLI: `yolo streamlit-predict`
Advantages of live inference:
- Seamless real-time object detection: Streamlit combined with YOLOv8 enables real-time object detection directly from your webcam feed. This allows for immediate analysis and insights, making it ideal for applications requiring instant feedback.
- Efficient resource utilization: YOLOv8's optimized algorithms ensure high-speed processing with minimal computational resources.
Ultralytics featured models: https://docs.ultralytics.com/models/, Ultralytics new solutions: https://docs.ultralytics.com/solutions/
Official documentation:
Ultralytics YOLOv8 documentation: refer to the official YOLOv8 documentation for comprehensive guides and insights on various computer vision tasks and projects. https://docs.ultralytics.com/
@GeorgeosDiazMontexano It wasn't like that, sir. Since the A100 and T4 are performance- and acceleration-centric GPUs, GPU usability has always been tied to their price tags. However, on Hugging Face you can use the A100, T4, and upgraded CPUs. If you need to build something performance-centric, you need GPUs, right? In that case, paying for a Pro subscription with ZeroGPU will definitely be more useful per month compared to other resource costs per hour.
All the best,
PrithivSakthi
Hi! @GeorgeosDiazMontexano ,
It means that GPU allocation is dynamic and varies per user, with each visitor/user having a quota that gradually resets over time.
So the A100 or other GPU error you are facing will reset after the time limit shown in the error message.
Some Spaces consume a high amount of GPU resources and may deplete your GPU quota.
Hi @ezzdev, this was a demo Space for the computer vision models. You can use the images in your project. Generating images outside ethical bounds (not safe for work) is at your own risk.
Three of them were trained with Tensor Art and the rest with SD on RunPod. Using them with the appropriate base model will give good results. But they are at an initial training stage; better ones may come in the future.
I have just created a step-by-step procedure, with a Colab demo link also attached in the repository's README.md.
https://github.com/prithivsakthiur/how-to-run-huggingface-spaces-on-local-machine-demo
Thanks for the read!
Models:
SG161222/RealVisXL_V4.0
stabilityai/sdxl-turbo
SG161222/RealVisXL_V2.0
runwayml/stable-diffusion-v1-5
Corcelio/mobius
fluently/Fluently-XL-Final
These are interesting models you can make use of for text-to-image generation.
If you want to run Hugging Face Spaces on your local machine, stay tuned to the repo https://github.com/PRITHIVSAKTHIUR/How-to-run-huggingface-spaces-on-local-machine-demo; I will update the step-by-step process there ASAP!
- Thank you
You have mentioned that you have experience with JuggernautXL, Rav Animated, SDXL, Realistic Vision, DreamShaper, and LoRA for generating images locally, and that you have a GTX 3060 for processing/acceleration.
Collection Zero consists of Spaces running on ZeroGPU (Nvidia A100) hardware; DALLE 4K and MidJourney are just quick names I kept for the trend.
Kindly clarify what you need me to do or how I can guide you, since you mentioned experience with Automatic1111 or ComfyUI on your local hardware.
So are you trying to run Spaces on your local hardware, or something else?
Hi @mk230580 !!
You asked for high-resolution image quality with fast computation; for your case I came up with the idea of T4 GPU acceleration. Yes, we know the NVIDIA A100 GPU is unmatched in its power for the highest-performing computing (HPC) tasks. Apart from that, you can use the T4 as a hardware accelerator. You asked me how to run it externally from Hugging Face, right? Use a T4 in Google Colab or any other workspace compatible with it. The A100 is also available in Colab, but you must be a premium user.
Running on a local system works the same way.
Just have the HF token to pass for login:
# Authenticate with Hugging Face
from huggingface_hub import login

# Log in to Hugging Face using the provided token
hf_token = '---pass your hugging face access token---'
login(hf_token)
Visit my Colab space for an instance that runs locally, outside of HF.
Hardware accelerator: T4 GPU
We know we can get the A100 and L4 in Colab for premium users / for a cost. The T4 is free for a certain amount of computation, so I went with it. On local hardware, you know what to do.
Second thing: the amount of detail in your prompt also affects the results. See higher-end, detailed prompts via https://huggingface.co/spaces/prithivMLmods/Top-Prompt-Collection, or on freeflo.ai or PromptHero, for better, more detailed results.
Colab link (example with stabilityai/sdxl-turbo):
https://colab.research.google.com/drive/1zYj5w0howOT3kiuASjn8PnBUXGh_MvhJ#scrollTo=Ok9PcD_kVwUI
You can use various models like RealVisXL_V4.0, Turbo, and more for better results.
**After passing the access token, remove your token before sharing with others.**
Hi Yasirkh,
Yes, you can run any Python-based SDK in Google Colab with the appropriate model and its api_url by using the correct request library. For example, if you are trying to run a text-to-image model, you can do it with the Inference API.
For example:

import requests

API_URL = "----------your api addr goes here---------"
headers = {"Authorization": "Bearer hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"}

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.content

image_bytes = query({
    "inputs": "Astronaut riding a horse",
})
You can access the image with PIL.Image, for example:

import io
from PIL import Image

image = Image.open(io.BytesIO(image_bytes))
You can find your access token under HF settings >> Access Tokens; replace the x's in `headers = {"Authorization": "Bearer hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"}` with it.
Example: `headers = {"Authorization": "Bearer hf_ABC1234567890xyz9876543210"}`. Install the required PyPI libraries.
Then add the Gradio blocks for whatever you need to perform in the interface function.
For your information, this is not the original MidJourney model; I have named the space "MidJourney" because it performs similar work. Give it a try and let me know whether you get it working. One more thing: you cannot commit/push code with access tokens visible; use secret keys or variables (when in repos).
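One common way to keep the token out of committed code is to read it from an environment variable (which is how Space secrets are exposed). A minimal sketch, assuming the secret is named `HF_TOKEN`:

```python
import os

def auth_headers(env_var="HF_TOKEN"):
    """Build the Authorization header from an environment variable or
    Space secret, so the token never appears in committed code."""
    token = os.environ.get(env_var)
    if not token:
        raise RuntimeError(f"Set {env_var} before running (e.g. as a Space secret).")
    return {"Authorization": f"Bearer {token}"}

# Usage with the query() function above:
#   headers = auth_headers()
#   response = requests.post(API_URL, headers=headers, json=payload)
```

This way the hardcoded `headers = {...}` line can be replaced entirely, and there is nothing to scrub before sharing the notebook.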
If you face any difficulties, reply to me again; I will surely help you with the logic, or I will share the Colab work link to make the case easier.
Try it in Google Colab / Jupyter Notebook / DataSpell / even in VS Code, wherever you find it easier.
-Thank You !
This is the time to share the collection of prompts with high parametric detail to produce the most detailed, flawless images.
You can check out the collection at: prithivMLmods/Top-Prompt-Collection
More than 200 highly detailed prompts have been used in the Spaces.
@prithivMLmods
Thank you for the read!
Huggingface APK Update v0.0.4
1. Fixed pinch-to-zoom.
2. Swipe gestures.
3. Fixed auto-rotate.
4. Updated app identifiers.
Download the app now!
Huggingface v0.0.4 download
Link: https://drive.google.com/file/d/1xEiH7LMdP14fBG-xDuSqKje5TRLV1PuS/view?usp=sharing
Like, Share, Follow
Huggingface for Android
Median (Go Native) plugin:
version 0.0.1
prithivMLmods/Huggingface-Android-App