Installation

Transformers works with Python 3.9+, https://pytorch.org/get-started/locally/[PyTorch] 2.1+, https://www.tensorflow.org/install/pip[TensorFlow] 2.6+, and https://flax.readthedocs.io/en/latest/[Flax] 0.4.1+. Create and activate a virtual environment with https://docs.python.org/3/library/venv.html[venv] or https://docs.astral.sh/uv/[uv], a fast Rust-based Python package and project manager.


venv

[,shell]
----
python -m venv .my-env
source .my-env/bin/activate
----


uv

[,shell]
----
uv pip install .[torch]
----


pip

[,shell]
----
pip install .[torch]
----
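After installing, a quick sanity check is to import the library and print its version. This is a minimal check; it only confirms the package is importable, not that a particular backend is configured.

[,py]
----
# Confirm that Transformers is importable and show which version was installed.
import transformers

print(transformers.__version__)
----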


Quickstart

Get started with Transformers right away with the https://huggingface.co/docs/transformers/pipeline_tutorial[Pipeline] API. The `Pipeline` is a high-level inference class that supports text, audio, vision, and multimodal tasks. It handles preprocessing the input and returns the appropriate output.

Instantiate a pipeline and specify the model to use for text generation. The model is downloaded and cached so you can easily reuse it. Finally, pass some text to prompt the model.

[,py]
----
from transformers import pipeline

pipeline = pipeline(task="text-generation", model="Qwen/Qwen2.5-1.5B")
pipeline("the secret to baking a really good cake is ")
[{'generated_text': 'the secret to baking a really good cake is 1) to use the right ingredients and 2) to follow the recipe exactly. the recipe for the cake is as follows: 1 cup of sugar, 1 cup of flour, 1 cup of milk, 1 cup of butter, 1 cup of eggs, 1 cup of chocolate chips. if you want to make 2 cakes, how much sugar do you need? To make 2 cakes, you will need 2 cups of sugar.'}]
----

To chat with a model, the usage pattern is the same. The only difference is you need to construct a chat history (the input to `Pipeline`) between you and the system.

> [!TIP]
> You can also chat with a model directly from the command line:
> `transformers chat Qwen/Qwen2.5-0.5B-Instruct`

[,py]
----
import torch
from transformers import pipeline

chat = [
    {"role": "system", "content": "You are a sassy, wise-cracking robot as imagined by Hollywood circa 1986."},
    {"role": "user", "content": "Hey, can you tell me any fun things to do in New York?"}
]

pipeline = pipeline(task="text-generation", model="meta-llama/Meta-Llama-3-8B-Instruct", dtype=torch.bfloat16, device_map="auto")
response = pipeline(chat, max_new_tokens=512)
print(response[0]["generated_text"][-1]["content"])
----

Expand the examples below to see how `Pipeline` works for different modalities and tasks.

Automatic speech recognition

[,py]
----
from transformers import pipeline

pipeline = pipeline(task="automatic-speech-recognition", model="openai/whisper-large-v3")
pipeline("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac")
{'text': ' I have a dream that one day this nation will rise up and live out the true meaning of its creed.'}
----

Image classification

[,py]
----
from transformers import pipeline

pipeline = pipeline(task="image-classification", model="facebook/dinov2-small-imagenet1k-1-layer")
pipeline("https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png")
[{'label': 'macaw', 'score': 0.997848391532898},
 {'label': 'sulphur-crested cockatoo, Kakatoe galerita, Cacatua galerita', 'score': 0.0016551691805943847},
 {'label': 'lorikeet', 'score': 0.00018523589824326336},
 {'label': 'African grey, African gray, Psittacus erithacus', 'score': 7.85409429227002e-05},
 {'label': 'quail', 'score': 5.502637941390276e-05}]
----

Visual question answering

[,py]
----
from transformers import pipeline

pipeline = pipeline(task="visual-question-answering", model="Salesforce/blip-vqa-base")
pipeline(
    image="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/idefics-few-shot.jpg",
    question="What is in the image?",
)
[{'answer': 'statue of liberty'}]
----
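The preprocessing and postprocessing that `Pipeline` takes care of can also be written by hand with the lower-level Auto classes. The following is a rough, simplified sketch of the text-generation case, not the exact internals of `Pipeline`; it reuses the Qwen checkpoint from above, and the generation length is an arbitrary choice.

[,py]
----
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the preprocessor and the model weights by checkpoint name.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-1.5B")
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-1.5B")

# Preprocessing: turn the prompt into token-id tensors.
inputs = tokenizer("the secret to baking a really good cake is ", return_tensors="pt")

# Inference: autoregressive generation (length chosen arbitrarily here).
output_ids = model.generate(**inputs, max_new_tokens=50)

# Postprocessing: decode the token ids back into text.
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
----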


Why should I use Transformers?

. Easy-to-use state-of-the-art models:
** High performance on natural language understanding & generation, computer vision, audio, video, and multimodal tasks.
** Low barrier to entry for researchers, engineers, and developers.
** Few user-facing abstractions with just three classes to learn (see the sketch after this list).
** A unified API for using all our pretrained models.
. Lower compute costs, smaller carbon footprint:
** Share trained models instead of training from scratch.
** Reduce compute time and production costs.
** Dozens of model architectures with 1M+ pretrained checkpoints across all modalities.
. Choose the right framework for every part of a model's lifetime:
** Train state-of-the-art models in 3 lines of code.
** Move a single model between PyTorch/JAX/TF2.0 frameworks at will.
** Pick the right framework for training, evaluation, and production.
. Easily customize a model or an example to your needs:
** We provide examples for each architecture to reproduce the results published by its original authors.
** Model internals are exposed as consistently as possible.
** Model files can be used independently of the library for quick experiments.
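The "few user-facing abstractions" and "unified API" points come down to a handful of classes loaded by checkpoint name: roughly a configuration, a preprocessing class, and a model. A minimal sketch follows; the DistilBERT checkpoint is only an illustrative choice.

[,py]
----
from transformers import AutoConfig, AutoModel, AutoTokenizer

# The same Auto classes work for any checkpoint on the Hub;
# only the checkpoint name changes.
checkpoint = "distilbert/distilbert-base-uncased"

config = AutoConfig.from_pretrained(checkpoint)        # architecture hyperparameters
tokenizer = AutoTokenizer.from_pretrained(checkpoint)  # preprocessing
model = AutoModel.from_pretrained(checkpoint)          # pretrained weights

inputs = tokenizer("Transformers exposes one API for every model.", return_tensors="pt")
outputs = model(**inputs)
print(config.model_type, outputs.last_hidden_state.shape)
----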


Why shouldn't I use Transformers?

* This library is not a modular toolbox of building blocks for neural nets. The code in the model files is not refactored with additional abstractions on purpose, so that researchers can quickly iterate on each of the models without diving into additional abstractions/files.
* The training API is optimized to work with PyTorch models provided by Transformers. For generic machine learning loops, you should use another library like https://huggingface.co/docs/accelerate[Accelerate] (see the sketch after this list).
* The https://github.com/huggingface/transformers/tree/main/examples[example scripts] are only _examples_. They may not necessarily work out-of-the-box on your specific use case, and you'll need to adapt the code for it to work.
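For such a generic loop, the Accelerate pattern looks roughly like the sketch below, using a toy random dataset and an arbitrary DistilBERT checkpoint purely for illustration; this is not a recommended training recipe.

[,py]
----
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator
from transformers import AutoModelForSequenceClassification

# Toy data: 32 random "sentences" of 16 token ids each, with binary labels.
input_ids = torch.randint(0, 30000, (32, 16))
labels = torch.randint(0, 2, (32,))
dataloader = DataLoader(TensorDataset(input_ids, labels), batch_size=8)

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert/distilbert-base-uncased", num_labels=2
)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# Accelerate handles device placement and (if configured) distributed setup.
accelerator = Accelerator()
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

model.train()
for batch_input_ids, batch_labels in dataloader:
    outputs = model(input_ids=batch_input_ids, labels=batch_labels)
    accelerator.backward(outputs.loss)  # replaces loss.backward()
    optimizer.step()
    optimizer.zero_grad()
----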


100 projects using Transformers

Transformers is more than a toolkit for using pretrained models; it's a community of projects built around it and the Hugging Face Hub. We want Transformers to enable developers, researchers, students, professors, engineers, and anyone else to build their dream projects. To celebrate Transformers reaching 100,000 stars, we wanted to put the spotlight on the community with the xref:./awesome-transformers.adoc[awesome-transformers] page, which lists 100 incredible projects built with Transformers. If you own or use a project that you believe should be part of the list, please open a PR to add it!


Example models

You can test most of our models directly on their https://huggingface.co/models[Hub model pages]. Expand each modality below to see a few example models for various use cases.

Audio

* Audio classification with https://huggingface.co/openai/whisper-large-v3-turbo[Whisper]
* Automatic speech recognition with https://huggingface.co/UsefulSensors/moonshine[Moonshine]
* Keyword spotting with https://huggingface.co/superb/wav2vec2-base-superb-ks[Wav2Vec2]
* Speech to speech generation with https://huggingface.co/kyutai/moshiko-pytorch-bf16[Moshi]
* Text to audio with https://huggingface.co/facebook/musicgen-large[MusicGen]
* Text to speech with https://huggingface.co/suno/bark[Bark]

Computer vision

* Automatic mask generation with https://huggingface.co/facebook/sam-vit-base[SAM]
* Depth estimation with https://huggingface.co/apple/DepthPro-hf[DepthPro]
* Image classification with https://huggingface.co/facebook/dinov2-base[DINO v2]
* Keypoint detection with https://huggingface.co/magic-leap-community/superpoint[SuperPoint]
* Keypoint matching with https://huggingface.co/magic-leap-community/superglue_outdoor[SuperGlue]
* Object detection with https://huggingface.co/PekingU/rtdetr_v2_r50vd[RT-DETRv2]
* Pose estimation with https://huggingface.co/usyd-community/vitpose-base-simple[VitPose]
* Universal segmentation with https://huggingface.co/shi-labs/oneformer_ade20k_swin_large[OneFormer]
* Video classification with https://huggingface.co/MCG-NJU/videomae-large[VideoMAE]

Multimodal

* Audio or text to text with https://huggingface.co/Qwen/Qwen2-Audio-7B[Qwen2-Audio]
* Document question answering with https://huggingface.co/microsoft/layoutlmv3-base[LayoutLMv3]
* Image or text to text with https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct[Qwen-VL]
* Image captioning with https://huggingface.co/Salesforce/blip2-opt-2.7b[BLIP-2]
* OCR-based document understanding with https://huggingface.co/stepfun-ai/GOT-OCR-2.0-hf[GOT-OCR2]
* Table question answering with https://huggingface.co/google/tapas-base[TAPAS]
* Unified multimodal understanding and generation with https://huggingface.co/BAAI/Emu3-Gen[Emu3]
* Vision to text with https://huggingface.co/llava-hf/llava-onevision-qwen2-0.5b-ov-hf[Llava-OneVision]
* Visual question answering with https://huggingface.co/llava-hf/llava-1.5-7b-hf[Llava]
* Visual referring expression segmentation with https://huggingface.co/microsoft/kosmos-2-patch14-224[Kosmos-2]

NLP

* Masked word completion with https://huggingface.co/answerdotai/ModernBERT-base[ModernBERT]
* Named entity recognition with https://huggingface.co/google/gemma-2-2b[Gemma]
* Question answering with https://huggingface.co/mistralai/Mixtral-8x7B-v0.1[Mixtral]
* Summarization with https://huggingface.co/facebook/bart-large-cnn[BART]
* Translation with https://huggingface.co/google-t5/t5-base[T5]
* Text generation with https://huggingface.co/meta-llama/Llama-3.2-1B[Llama]
* Text classification with https://huggingface.co/Qwen/Qwen2.5-0.5B[Qwen]
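As in the Quickstart, each checkpoint above can be loaded through `pipeline` with the matching task string. For example, summarization with the BART checkpoint listed above; the input text here is arbitrary.

[,py]
----
from transformers import pipeline

# Summarization with the BART checkpoint from the NLP list above.
summarizer = pipeline(task="summarization", model="facebook/bart-large-cnn")
result = summarizer(
    "Transformers provides pretrained checkpoints for text, vision, audio, and "
    "multimodal tasks, and each of the models listed above can be loaded the same way."
)
print(result[0]["summary_text"])
----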


Citation

We now have a https://www.aclweb.org/anthology/2020.emnlp-demos.6/[paper] you can cite for the 🤗 Transformers library:

[,bibtex]
----
@inproceedings{wolf-etal-2020-transformers,
    title = "Transformers: State-of-the-Art Natural Language Processing",
    author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
    month = oct,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
    pages = "38--45"
}
----
