Installing LM Studio (Mac)
Go to the official LM Studio site:
https://lmstudio.ai/
For Mac, download the file: LM-Studio-0.3.15-11-arm64.dmg
Double-click the downloaded file to run it; a window appears where we drag the LM Studio icon onto the Applications icon so it gets installed.
Afterwards we can close that window and find "LM Studio" through Finder, or in our "Applications" folder, and double-click it to launch it.
If this is the first time we run it, it will ask:
"LM Studio" is an app downloaded from the Internet. Are you sure you want to open it?
And we click: -> Open
A green button appears that says "Get your first LLM", and we click on it.
It recommends a model; in this case we see:
Download your first local LLM
Start with a state-of-the-art local reasoning model
DeepSeek R1 Distilled (Qwen 7B) DeepSeek
DeepSeek R1 distilled into Qwen 7B: a powerful reasoning model in a small package
And we click the button that says: "Download 4.68GB"
Downloading the selected model takes a while, and when it finishes the model will be installed in the following location (on Mac):
/Users/rogelioferreiraescutia/.lmstudio/models/lmstudio-community/DeepSeek-R1-Distill-Qwen-7B-GGUF/DeepSeek-R1-Distill-Qwen-7B-Q4_K_M.gguf
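As an optional check, the downloaded models can also be listed from code. A minimal Python sketch, assuming the default ~/.lmstudio/models folder shown above:

# List the GGUF models downloaded by LM Studio.
# Assumes the default models folder ~/.lmstudio/models mentioned above.
from pathlib import Path

models_dir = Path.home() / ".lmstudio" / "models"

for gguf in sorted(models_dir.rglob("*.gguf")):
    size_gb = gguf.stat().st_size / 1024 ** 3
    print(f"{gguf.relative_to(models_dir)}  ({size_gb:.2f} GB)")

For the model above, this should print one line ending in DeepSeek-R1-Distill-Qwen-7B-Q4_K_M.gguf with a size of roughly 4.68 GB.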
To start working we click: -> "Start New Chat"
A window appears showing the model that was just downloaded, and to load it we click: -> "Load Model"
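As another optional check, LM Studio also ships a local server with an OpenAI-compatible API (it has to be enabled inside the app, and by default it listens on http://localhost:1234; treat both details as assumptions for this sketch). A minimal Python query that lists the models the server exposes:

# Ask LM Studio's local server which models it exposes.
# Assumes the server is enabled and listening on the default port 1234.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:1234/v1/models") as response:
    data = json.load(response)

for model in data.get("data", []):
    print(model["id"])  # e.g. deepseek-r1-distill-qwen-7b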
The model is now ready to take questions, so back in the chat we type the following "Prompt":
What is the best LLM small model?
And it replies with:
deepseek-r1-distill-qwen-7b
Thoughts
Thought for 20.76 seconds
Okay, so I'm trying to figure out what the best small language models are right now. I know there's been a lot of advancements in AI, especially with large models like GPT-3 and T5 dominating the scene. But for someone who doesn't have access to those big models, or maybe wants something more compact for specific tasks, smaller models must be the way to go.
First off, I remember hearing about Flamingo. It's from Hugging Face, right? I think it's tailored for translation tasks. Since translating languages is a niche but important task, especially with more languages becoming available, this model seems pretty useful. But how does it compare to other options?
Then there's M2, which I believe is by Meta (formerly Facebook). It supports multiple languages and has some features like summarization. That could be great for general tasks beyond just translation. But I'm not sure about its performance compared to others.
BloomReach comes to mind too. It's supposed to handle retrieval systems and multi-tasking, which is useful for applications that need both searching and other functionalities. But if the focus is solely on a specific task like translation or summarization, would BloomReach be the best fit?
I'm also thinking about the Hugging Face Transformers library itself, as it provides access to various models. For someone just starting out, maybe trying out their small model hub could be a good way to experiment without needing to set up everything from scratch.
Wait, but what makes a model "best"? It probably depends on the use case—like if you need something for translation vs. summarization vs. retrieval. Each of these models (Flamingo, M2, BloomReach) seems strong in their specific areas. Maybe the best approach is to evaluate based on the tasks one needs to perform.
I should also consider factors like model size, inference speed, and accuracy. Smaller models are lighter but might be less accurate than larger ones. However, for practical applications where efficiency is key, they can be more suitable.
So, if someone wants a small model for translating between several languages, Flamingo would be the way to go because it's designed for that. For summarization across multiple languages and domains, M2 seems like a solid choice due to its multilingual capabilities.
BloomReach, on the other hand, is more about retrieval and general multitasking, which might not be as specialized but could offer versatility in certain scenarios.
In summary, the best small model depends on the specific task at hand. For translation, Flamingo; for summarization, M2; and for retrieval/multitask, BloomReach. But also considering the ease of use with libraries like Hugging Face Transformers can help choose the right one without a steep learning curve.
The best small language models are tailored to specific tasks, each offering unique advantages:
Flamingo: Optimized for multilingual translation, making it ideal for translating between several languages. Its specialized design ensures efficiency and accuracy in translation tasks.
M2 (Meta): Offers multilingual capabilities with features like summarization, making it versatile for various general tasks across multiple languages and domains. It balances performance and ease of use effectively.
BloomReach: Designed for retrieval systems and multitasking, this model is useful in applications requiring both searching and additional functionalities, providing a balance between versatility and compact size.
When choosing a small model, consider the specific task—whether it's translation, summarization, or general multitasking—and utilize libraries like Hugging Face Transformers to easily access and implement these models without a steep learning curve. Each model excels in its niche, allowing for efficient and effective solutions depending on your needs.
26.92 tok/sec · 763 tokens · 0.23s to first token · Stop reason: EOS Token Found
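The same question can also be sent through the local server mentioned above. A minimal Python sketch, again assuming the default address http://localhost:1234 and that the model shown above is loaded:

# Send the same prompt to the model through LM Studio's local server.
# Assumes the OpenAI-compatible server is enabled on the default port 1234
# and that deepseek-r1-distill-qwen-7b is loaded.
import json
import urllib.request

payload = {
    "model": "deepseek-r1-distill-qwen-7b",
    "messages": [
        {"role": "user", "content": "What is the best LLM small model?"}
    ],
    "temperature": 0.7,
}

request = urllib.request.Request(
    "http://localhost:1234/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    answer = json.load(response)

print(answer["choices"][0]["message"]["content"])

As a sanity check on the stats above, 763 tokens at 26.92 tok/sec corresponds to roughly 763 / 26.92 ≈ 28 seconds of generation on this machine; results will vary with hardware.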