Abstract
Predicting silicon content in a steel blast furnace is crucial for maintaining high-quality metal production and optimizing furnace operations. In this study, we explore the use of decoder-only transformer neural networks (GPT-like models) to model time-series silicon data collected from a Midwestern steel blast furnace. While transformers are traditionally associated with text processing, we adapt the decoder architecture to ingest numerical tabular time-series data directly. Our experiments involve training a GPT model from scratch on the silicon data, implementing various data preprocessing techniques, and evaluating different architectural modifications. We find that the model exhibits learning capability and, in some cases, produces promising predictions, though further refinement is needed to improve consistency and accuracy. We discuss the advantages of decoder-only transformers for time-series forecasting, the challenges encountered, GPU operational conditions, and potential future improvements. Our findings suggest that transformer-based architectures could play a significant role in time-series modeling for industrial applications.