A simple deep learning framework designed for educational purposes and small-scale experiments. TensorPlay provides basic building blocks for constructing and training neural networks, including tensor operations, layers, optimizers, and training utilities.
⭐ Support This Project
If you find TensorPlay useful for learning and experimentation, please consider supporting the repository by giving it a star on GitHub. Your support motivates continued development and improvements to this educational tool. I'm also open to feedback and suggestions for making TensorPlay even more beginner-friendly and effective for learning deep learning concepts.

🔍 Why TensorPlay is Beginner-Friendly
TensorPlay is a lightweight, educational deep learning framework built from scratch in Python. For newcomers, TensorPlay strips away the complex optimizations and extra machinery of industrial-grade frameworks like TensorFlow and PyTorch. It focuses on the core logic of computational graphs, making it a perfect tool for grasping the computational processes of deep learning at a fundamental level.

🚀 Features
- Core Tensor Structure: Implements a `Tensor` class with automatic gradient computation, supporting basic arithmetic operations and common activation functions (`ReLU`, `Sigmoid`, `Tanh`, `Softmax`, `GELU`); see the sketch after this list.
- Neural Network Components: Includes essential layers like `Dense` (fully connected layer) and a `Module` base class for building custom models.
- Training Utilities: Provides `DataLoader` for batching data, training/validation loop helpers (`train_on_batch`, `valid_on_batch`), and optimization with the `Adam` optimizer.
- Early Stopping: Built-in `EarlyStopping` callback to prevent overfitting.
- Loss Functions: Implements common loss functions such as `MSE` (Mean Squared Error), `SSE` (Sum of Squared Errors), and `NLL` (Negative Log Likelihood).
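As a quick illustration of the `Tensor` class, here is a minimal autograd sketch. The import path and the `requires_grad`/`backward()`/`grad` names are assumptions modeled on common autograd APIs, not confirmed TensorPlay signatures; check the source for the real interface.

```python
# Hypothetical sketch of TensorPlay's autograd Tensor.
# The import path and the requires_grad / backward() / grad names are assumed.
from tensorplay import Tensor

x = Tensor([1.0, 2.0, 3.0], requires_grad=True)
y = (x * x).relu().sum()   # basic arithmetic plus an activation
y.backward()               # propagate gradients through the computational graph

print(x.grad)              # for positive x, d(sum(relu(x*x)))/dx = 2*x
```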
🔰 Basic Usage
1. Define a Model
Create custom models by inheriting from `Module` and defining the forward pass.
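A minimal sketch is shown below. The import path and the `Dense`/`Module` signatures (e.g. `Dense(in_features, out_features)` and calling layers directly) are assumptions; consult the repository source for the actual API.

```python
# Hypothetical model definition; layer and activation call styles are assumed.
from tensorplay import Module, Dense

class SimpleNet(Module):
    def __init__(self):
        super().__init__()
        self.fc1 = Dense(4, 16)   # 4 input features -> 16 hidden units
        self.fc2 = Dense(16, 2)   # 16 hidden units -> 2 output classes

    def forward(self, x):
        x = self.fc1(x).relu()        # assumed Tensor-method activation style
        return self.fc2(x).softmax()
```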
2. Prepare Data
Use `DataLoader` to handle batching and shuffling.
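For example (the `DataLoader(dataset, batch_size=..., shuffle=...)` signature and the `(features, label)` sample format are assumptions modeled on common loaders):

```python
# Hypothetical data preparation; DataLoader's keyword arguments are assumed.
from tensorplay import DataLoader

# Toy samples as (features, label) pairs; the expected format is an assumption.
train_data = [([0.1, 0.2, 0.3, 0.4], 0), ([0.5, 0.6, 0.7, 0.8], 1)]
valid_data = [([0.9, 0.8, 0.7, 0.6], 1)]

train_loader = DataLoader(train_data, batch_size=2, shuffle=True)
valid_loader = DataLoader(valid_data, batch_size=1, shuffle=False)
```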
3. Train the Model
Train the model using the `train_on_batch` function. A typical training loop includes batch training, validation, and an early-stopping check.
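The loop below is only a sketch; the signatures of `train_on_batch`, `valid_on_batch`, `Adam`, and `EarlyStopping` are assumptions based on the component names listed above.

```python
# Hypothetical training loop; every signature below is assumed, not confirmed.
from tensorplay import Adam, EarlyStopping, train_on_batch, valid_on_batch, MSE

model = SimpleNet()
optimizer = Adam(model.parameters(), lr=1e-3)
early_stopping = EarlyStopping(patience=5)

for epoch in range(100):
    for x, y in train_loader:                       # batch training
        train_on_batch(model, x, y, loss_fn=MSE, optimizer=optimizer)

    valid_losses = [valid_on_batch(model, x, y, loss_fn=MSE)
                    for x, y in valid_loader]       # validation
    valid_loss = sum(valid_losses) / len(valid_losses)

    if early_stopping.step(valid_loss):             # early-stopping check
        break
```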
4. Evaluate the Model
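After training, evaluate the model on a held-out test set, for example by computing accuracy. The prediction style below (calling the model and taking an argmax) is an assumption.

```python
# Hypothetical evaluation; the prediction API is assumed.
test_data = [([0.2, 0.1, 0.4, 0.3], 0)]      # prepared like the data in step 2
correct = 0
for x, y in test_data:
    pred = model(x)                          # forward pass
    correct += int(pred.argmax() == y)       # assumed argmax-style class choice

accuracy = correct / len(test_data)
print(f"Test accuracy: {accuracy:.3f}")
```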
🧩 Example: KRK Chess Endgame Classification
The `demo/KRK_classify.py` script demonstrates classifying chess endgame positions (King-Rook-King) as either a draw or not. Key steps:
- Data Loading: Parses `krkopt.data` into numerical features.
- Data Preparation: Splits data into training, validation, and test sets.
- Model Definition: Uses a 3-layer fully connected network (`KRKClassifier`); an illustrative sketch follows this list.
- Training: Uses the `Adam` optimizer with early stopping.
- Evaluation: Computes test accuracy.
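For reference, one plausible shape for the classifier is sketched below. The layer sizes and activations here are illustrative assumptions; see `demo/KRK_classify.py` for the actual definition.

```python
# Illustrative sketch only; the real KRKClassifier may differ.
from tensorplay import Module, Dense

class KRKClassifier(Module):
    def __init__(self):
        super().__init__()
        self.fc1 = Dense(6, 32)    # 6 features: file/rank of the three pieces
        self.fc2 = Dense(32, 16)
        self.fc3 = Dense(16, 2)    # draw vs. not a draw

    def forward(self, x):
        x = self.fc1(x).relu()
        x = self.fc2(x).relu()
        return self.fc3(x).softmax()
```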
Run the example with `python demo/KRK_classify.py`.
🔧 Limitations and Future Improvements
Current Limitations
- Only supports 1D tensors; no higher-dimensional data (matrices, images).
- Limited layer types (only `Dense` is implemented).
- Basic optimizer support (only `Adam` is available).
- No GPU acceleration; all operations are CPU-bound.
- Limited debugging tools for computation graphs.
Planned Improvements
- Add support for n-dimensional tensors (matrices, 3D tensors).
- Implement more layers (`Dropout`, `BatchNorm`).
- Support GPU acceleration via CUDA for faster training.
- Improve automatic differentiation efficiency.
- Include more loss functions (Cross-Entropy, MAE).
- Add visualization tools for computation graphs and training metrics.
📚 Related Resources
Here are some helpful resources to deepen your understanding of deep learning concepts:
- Introduction to Hook - Comprehensive Guide to Hook Functionality
🤝 Contributing
Contributions are welcome! Feel free to open issues for bugs or feature requests, or submit pull requests with improvements.