Category: Deep Learning
>> How Do Large Language Model Operations (LLMOps) Work?
>> Machine Learning vs. Deep Learning
>> An Introduction to GPT-4o
>> Object Detection with Transformers
>> Building a GPT Tokenizer
>> Why Are Residual Connections Important in Transformer Architectures?
>> One-Hot Encoding Explained
>> Probability: Joint vs. Marginal vs. Conditional
>> Introduction to Large Language Models
>> Comparative Analysis of Top Large Language Models
>> What Exactly Is an N-Gram?
>> An Introduction to Gemini by Google DeepMind
>> Value Iteration vs. Policy Iteration in Reinforcement Learning
>> Attention vs. Self-Attention
>> Bias in Neural Networks
>> Introduction to Convolutional Neural Networks
>> What Is a Policy in Reinforcement Learning?
>> F-1 Score for Multi-Class Classification
>> Neural Network Architecture: Criteria for Choosing the Number and Size of Hidden Layers
>> Advantages and Disadvantages of Neural Networks Against SVMs
>> Batch Normalization in Convolutional Neural Networks
>> Random Initialization of Weights in a Neural Network
>> Epoch in Neural Networks
>> Solving the K-Armed Bandit Problem
>> Encoder-Decoder Models for Natural Language Processing
>> Ugly Duckling Theorem
>> Word Embeddings: CBOW vs. Skip-Gram
>> Semantic Similarity of Two Phrases
>> Open Source Neural Network Libraries
>> Trade-offs Between Accuracy and the Number of Support Vectors in SVMs
>> Using a Hard Margin vs. Soft Margin in SVM
>> State Machines: Components, Representations, Applications
>> k-Nearest Neighbors and High Dimensional Data
>> How to Create a Smart Chatbot?
>> Why Mini-Batch Size Is Better Than One Single “Batch” With All Training Data
>> Feature Selection and Reduction for Text Classification
>> Outlier Detection and Handling
>> Word2vec Word Embedding Operations: Add, Concatenate or Average Word Vectors?
>> Relation Between Learning Rate and Batch Size
>> Using GANs for Data Augmentation
>> The Effects of the Depth and Number of Trees in a Random Forest
>> Linearly Separable Data in Neural Networks
>> An Introduction to Generative Adversarial Networks
>> Applications of Generative Models
>> Image Processing: Occlusions
>> Algorithms for Image Comparison
>> Intuition Behind Kernels in Machine Learning
>> An Introduction to Contrastive Learning
>> Activation Functions: Sigmoid vs. Tanh
>> Latent Space in Deep Learning
>> What Is Inductive Bias in Machine Learning?
>> Real-Life Examples of Supervised Learning and Unsupervised Learning
>> Hidden Layers in a Neural Network
>> Mean Average Precision in Object Detection
>> Convolutional Neural Network vs. Regular Neural Network
>> What Are “Bottlenecks” in Neural Networks?
>> Recurrent vs. Recursive Neural Networks in Natural Language Processing
>> Comparing Naïve Bayes and SVM for Text Classification
>> Off-policy vs. On-policy Reinforcement Learning
>> Cross-Validation: K-Fold vs. Leave-One-Out
>> Differences Between Backpropagation and Feedforward Networks
>> Neural Networks: Difference Between Conv and FC Layers
>> Neural Networks: What Is Weight Decay Loss?
>> Data Augmentation
>> Instance Segmentation vs. Semantic Segmentation
>> What Are Channels in Convolutional Networks?
>> What Does Backbone Mean in Neural Networks?
>> Parameters vs. Hyperparameters
>> Generative Adversarial Networks: Discriminator’s Loss and Generator’s Loss
>> What Are Embedding Layers in Neural Networks?
>> Recurrent Neural Networks
>> What Is Maxout in a Neural Network?
>> Deep Neural Networks: Padding
>> How Do Siamese Networks Work in Image Recognition?
>> Q-Learning vs. Deep Q-Learning vs. Deep Q-Network
>> Epoch or Episode: Understanding Terms in Deep Reinforcement Learning
>> Deterministic vs. Stochastic Policies in Reinforcement Learning
>> An Introduction to Deepfakes
>> An Introduction to Graph Neural Networks
>> Prevent the Vanishing Gradient Problem with LSTM
>> How Do AI Image Generators Work?
>> Saturating Non-Linearities
>> ReLU vs. LeakyReLU vs. PReLU
>> What Is Group Normalization?
>> GAN Implementation in PyTorch
>> How to Use Model Temperature of GPT?
>> Comparison Between BERT and GPT-3 Architectures
>> Training Data for Sentiment Analysis
>> Normalizing Inputs of Neural Networks
>> Differences Between Bias and Error
>> Random Sample Consensus Explained
>> Multi-Layer Perceptron vs. Deep Neural Network
>> Real-World Uses for Genetic Algorithms
>> The Reparameterization Trick in Variational Autoencoders
>> Autoencoders Explained
>> How to Calculate Receptive Field Size in CNN
>> Instance vs. Batch Normalization
>> What Is the Difference Between Gradient Descent and Gradient Ascent?
>> Online Learning vs. Offline Learning
>> What Is End-to-End Deep Learning?
>> Model-free vs. Model-based Reinforcement Learning
>> How To Convert a Text Sequence to a Vector
>> Bias Update in Neural Network Backpropagation
>> 0-1 Loss Function Explained
>> What Does Pre-training a Neural Network Mean?