Is Traditional Machine Learning Still Worth It in the Age of Generative AI?

In recent years, the rapid progress of generative artificial intelligence and large language models has raised a common question within the data science community: is traditional machine learning still relevant? With the success of deep neural networks in areas such as computer vision, speech recognition, and natural language processing, many professionals have started to wonder whether classical algorithms such as logistic regression, decision trees, support vector machines, or random forests have become obsolete.

This discussion has gained momentum in recent articles, such as the one published by Lazy Programmer on Medium, which directly questions whether classical machine learning is “dead.”

In practice, however, the scenario is quite different from this perception. Traditional machine learning continues to be widely used, especially in problems involving structured data, which are predominant in corporate environments. Customer data, financial transactions, economic indicators, and sensor records are typically organized in tables. For this type of information, algorithms such as gradient boosting, random forests, or linear models often deliver excellent performance. Comparative studies show that in tasks involving tabular data, classical models frequently achieve performance comparable to — or even better than — deep neural networks.
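To make this concrete, here is a minimal sketch of the typical tabular workflow: a gradient-boosting classifier trained on a synthetic table of numeric features using scikit-learn. The dataset shape and model settings are illustrative, not taken from any particular study.

```python
# Minimal sketch: gradient boosting on synthetic tabular data (scikit-learn).
# The dataset and hyperparameters are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic "table": 1,000 rows, 20 numeric columns.
X, y = make_classification(n_samples=1000, n_features=20,
                           n_informative=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

model = GradientBoostingClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

acc = accuracy_score(y_test, model.predict(X_test))
print(f"Test accuracy: {acc:.3f}")
```

On structured data like this, a few lines of code and a few seconds of CPU time are often all it takes to reach a strong baseline.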

Another important factor is computational cost. Deep learning models usually require large volumes of data and more advanced hardware infrastructure for training. Classical algorithms, on the other hand, can often be trained quickly on standard computers and with smaller datasets, making them more practical for many business projects.

Interpretability is also a key point. In sectors such as finance, healthcare, and insurance, many automated decisions must be explainable. Models like logistic regression or decision trees allow practitioners to clearly identify which variables influence predictions. In contrast, deep neural networks tend to be more difficult to interpret, which may limit their adoption in regulated contexts.
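The interpretability point can be illustrated with logistic regression: after standardizing the features, the fitted coefficients directly show the direction and relative strength of each variable's influence. The credit-style feature names and the data-generating process below are invented for the example.

```python
# Sketch of interpretability with logistic regression.
# Feature names and the simulated "default risk" data are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 500
income = rng.normal(50, 15, n)        # thousands per year
debt_ratio = rng.uniform(0, 1, n)
late_payments = rng.poisson(1.0, n)

# Simulated risk: driven up by debt ratio and late payments, down by income.
logit = -2 + 3 * debt_ratio + 0.8 * late_payments - 0.04 * income
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([income, debt_ratio, late_payments])
X_std = StandardScaler().fit_transform(X)  # standardize so coefficients are comparable

clf = LogisticRegression().fit(X_std, y)
for name, coef in zip(["income", "debt_ratio", "late_payments"], clf.coef_[0]):
    print(f"{name:>14}: {coef:+.2f}")
```

A positive coefficient means the variable pushes predictions toward default, a negative one pushes away from it, and the magnitudes rank the variables by influence. This kind of readout is exactly what regulators and auditors can review, and what a deep network does not provide out of the box.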

This does not mean that deep learning is not revolutionary. On the contrary, it has enabled impressive advances in problems involving unstructured data, such as images, audio, and text. Deep neural networks can automatically extract relevant features directly from raw data, reducing the need for the manual feature engineering that traditional algorithms often require.

Even so, the current artificial intelligence landscape points to a coexistence of different approaches, rather than a complete replacement of classical techniques. In many modern systems, pipelines combine deep learning and traditional models: neural networks are used to extract complex data representations, while classical algorithms perform tasks such as classification, ranking, or prediction.
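A hybrid pipeline of this kind can be sketched in a few lines. In production the feature extractor would typically be a pretrained deep network; here, as a self-contained stand-in, a small scikit-learn MLP learns a representation, and a gradient-boosting model performs the final classification on its hidden-layer activations.

```python
# Hedged sketch of a hybrid pipeline: neural network as feature extractor,
# classical model as the final classifier. The small MLP stands in for a
# pretrained deep network; all settings are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=20,
                           n_informative=10, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=1)

# Step 1: train the neural "feature extractor".
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=1)
mlp.fit(X_train, y_train)

def hidden_features(mlp, X):
    """Forward pass through the hidden layers only (ReLU activations)."""
    a = X
    for W, b in zip(mlp.coefs_[:-1], mlp.intercepts_[:-1]):
        a = np.maximum(a @ W + b, 0.0)
    return a

# Step 2: a classical model classifies the learned representation.
gbm = GradientBoostingClassifier(random_state=1)
gbm.fit(hidden_features(mlp, X_train), y_train)

acc = gbm.score(hidden_features(mlp, X_test), y_test)
print(f"Hybrid pipeline accuracy: {acc:.3f}")
```

The division of labor mirrors real systems: the network turns raw inputs into a compact representation, and the classical model, which is cheap to train and easy to tune, does the decision-making on top of it.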

Furthermore, tools widely used in industry — such as XGBoost, LightGBM, and CatBoost — remain extremely popular for tabular data problems and business forecasting.

Therefore, the idea that traditional machine learning is dead does not reflect reality. What we are actually observing is an expansion of the artificial intelligence ecosystem. While deep learning dominates applications involving complex data and generative models, classical algorithms continue to play a fundamental role in many practical applications.

For professionals in the field, the most effective strategy is not to choose one side, but rather to understand the strengths of each approach and know when to use each technique.

2 thoughts on “Is Traditional Machine Learning Still Worth It in the Age of Generative AI?”

  1. I totally agree. One can see the same situation in programming: although the development environments of the last decade brought simplification and automation, the programmers who hold deep programming knowledge are the ones on the front line and best suited to shape the digital future, be it in cybersecurity or quantum computing. There is a silent recognition, in the intelligence community, that some of the best minds work on highly paid criminal tasks, or as government watchdogs. A programmer with a superficial knowledge of elementary programming can never beat, for example, one who knows Assembly, C, C++, or similar. In a world governed by computers, a skilled programmer can do unbelievable damage not only to people or organizations but also to whole countries, if on the wrong side or driven by a twisted ideology. So it is not only AI or military organizations that represent a threat to humanity; it is also individuals with deep knowledge and the will to do whatever it takes to achieve their goals. Beware of your nerd neighbors!!
