LLM Everything

🦖Transformer

  • Tokenizer
  • Embeddings
  • Positional Encoding
  • Self Attention
  • Multi-Head Attention
  • Add & Norm
  • FeedForward
  • Linear & Softmax
  • Decoding Strategy
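
The subsections above follow the order of a Transformer's forward pass. Below is a minimal PyTorch sketch (not the site's code) of how the pieces compose; the dimensions, the ReLU FeedForward, the post-norm placement of Add & Norm, and the learned positional embeddings are illustrative assumptions, not the book's exact choices.

```python
import torch
import torch.nn as nn

class TransformerBlock(nn.Module):
    """One decoder-style block: Multi-Head Self Attention and FeedForward,
    each followed by Add & Norm (post-norm, as in the original paper)."""
    def __init__(self, d_model=512, n_heads=8, d_ff=2048):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model)
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x):
        T = x.size(1)
        # Causal mask: True marks positions a token may NOT attend to
        mask = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
        attn_out, _ = self.attn(x, x, x, attn_mask=mask, need_weights=False)
        x = self.norm1(x + attn_out)    # Add & Norm after attention
        x = self.norm2(x + self.ff(x))  # Add & Norm after FeedForward
        return x

vocab_size, d_model, seq_len = 32000, 512, 16
tok_emb = nn.Embedding(vocab_size, d_model)   # Embeddings for token ids
pos_emb = nn.Embedding(1024, d_model)         # learned Positional Encoding
block = TransformerBlock(d_model)
lm_head = nn.Linear(d_model, vocab_size)      # final Linear layer

ids = torch.randint(0, vocab_size, (1, seq_len))   # ids from a Tokenizer
h = tok_emb(ids) + pos_emb(torch.arange(seq_len))
logits = lm_head(block(h))
probs = torch.softmax(logits[:, -1], dim=-1)  # Softmax over the vocabulary
```

A Decoding Strategy (greedy, top-k, nucleus sampling, ...) then picks the next token from `probs`; a real model stacks many such blocks before the head.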