This study presents a method for reducing the complexity of trained neural networks by identifying and removing redundant neurons that do not contribute to model performance. The technique applies perturbations to the target variable and then detects and removes weights that deviate significantly from those of a control model. Unlike traditional pruning methods, this approach not only simplifies networks but can also improve their predictive performance. The method eliminates up to 90% of active weights while maintaining or improving accuracy. Its effectiveness is demonstrated in two real-world applications: predicting chemical reaction yields and assessing molecular affinity.
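To make the perturbation-and-deviation idea concrete, the following is a minimal sketch, not the study's actual procedure: it uses a least-squares linear model as a stand-in for a trained network, and the perturbation scale, number of trials, and deviation threshold are illustrative assumptions. Weights whose values shift strongly (relative to their magnitude) when the target is perturbed are treated as noise-driven and pruned.

```python
# Hedged sketch of deviation-based weight pruning under target perturbation.
# A linear least-squares fit stands in for network training; all constants
# (perturbation scale, trial count, threshold) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: only the first 3 of 20 features carry signal.
X = rng.normal(size=(500, 20))
true_w = np.zeros(20)
true_w[:3] = [2.0, -1.5, 0.7]
y = X @ true_w + 0.1 * rng.normal(size=500)

def fit(X, y):
    """Least-squares weights (stand-in for training a model)."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Control model: trained on the original, unperturbed target.
w_control = fit(X, y)

# Perturbed models: trained on targets with added noise, averaged over trials.
n_trials = 20
deviations = np.zeros_like(w_control)
for _ in range(n_trials):
    y_perturbed = y + 0.5 * rng.normal(size=y.shape)   # assumed perturbation scale
    deviations += np.abs(fit(X, y_perturbed) - w_control)
deviations /= n_trials

# Prune weights whose deviation is large relative to their control value:
# such weights respond to noise rather than to signal in the data.
threshold = 1.0                                         # assumed ratio threshold
relative_dev = deviations / (np.abs(w_control) + 1e-12)
mask = relative_dev < threshold                         # True = keep the weight

w_pruned = np.where(mask, w_control, 0.0)
print(f"kept {mask.sum()} of {mask.size} weights")
print("pruned-model RMSE:", np.sqrt(np.mean((X @ w_pruned - y) ** 2)))
```

In this toy setting the signal-carrying weights stay nearly fixed under target noise, while the near-zero weights fluctuate freely, so the relative-deviation criterion keeps the former and zeroes the latter.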
Takeaways:
- Many trained neural networks contain redundant neurons that do not contribute to accurate predictions.
- Excessive model complexity increases computational costs and hinders deployment in resource-limited environments.
- The proposed method removes up to 90% of network weights while preserving or improving predictive performance.
- Unlike traditional pruning, this technique enhances predictive accuracy alongside network reduction.
- The approach automates the selection of essential neurons, reducing computational time and resources.
- The method was validated through applications in chemical reaction yield prediction and molecular affinity assessment.
- The technique has broad applicability across machine learning tasks, especially in fields requiring efficient AI models.