5 Comments
Juan Herrera

Great post!

Mark

Love the post and the series! I hope you come up with something similar in the future.

I also have a question. When you wrote

"A critical factor at play is the inherent “locality inductive bias” in neural networks, which can limit their ability to capture patterns that aren’t naturally localized in a tabular structure."

Isn’t a locality inductive bias only present in CNN and RNN architectures, and not in fully connected networks or transformers?

Daniel

I have been in the data science world for a short time, and posts like this are helping me a lot to understand the fundamentals of ML and DS.

Damon

What are your honest thoughts on TabPFN?

Francisco Javier Arceo

Great stuff and very much agree.

Until neural networks can learn to sort and discretely partition attributes with few computations, XGBoost will continue to be all you need.
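Concretely, the operation a tree gets almost for free is one sort per feature followed by a linear scan over candidate thresholds. A rough Python sketch of that sort-and-split step, with a made-up best_split helper (illustrative only; XGBoost's actual split finding uses gradient/hessian statistics and histogram approximations):

```python
import numpy as np

def best_split(x, y):
    """Find the single threshold on feature x that minimizes the
    squared error of the two resulting partitions of y.
    Illustrative sketch, not XGBoost's real implementation."""
    order = np.argsort(x)                 # the "sort" step trees rely on
    x_sorted, y_sorted = x[order], y[order]

    best_thr, best_sse = None, np.inf
    for i in range(1, len(x_sorted)):
        if x_sorted[i] == x_sorted[i - 1]:
            continue                      # no valid threshold between tied values
        left, right = y_sorted[:i], y_sorted[i:]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best_sse = sse
            # discrete partition point: midway between adjacent sorted values
            best_thr = (x_sorted[i] + x_sorted[i - 1]) / 2
    return best_thr, best_sse

# A sharp step in y is recovered exactly by one split,
# something a smooth neural network has to approximate.
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)
y = (x > 0.5).astype(float)
print(best_split(x, y))  # threshold near 0.5, near-zero error
```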
