I read *Speeding up the Hyperparameter Optimization of Deep Convolutional Neural Networks* (worldscientific.com/doi/abs/10), which suggests starting with smaller images and scaling up as you narrow down good ranges for the hyperparameters. Does anyone know of similar work, but with respect to training set size? Can I run a search using 1/100, 1/10, or 1/4 of my training set and expect the hyperparameters to carry over when I train on all of my data?
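For anyone wanting to try this empirically, here is a minimal sketch of the idea on a toy problem: tune a single hyperparameter (ridge regularization strength) on growing fractions of the training set and check whether the winning value is stable across fractions. Everything here (the synthetic data, the `alpha` grid, the fractions) is illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data standing in for a real training set.
n, d = 10_000, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + rng.normal(scale=0.5, size=n)

# Hold out a fixed validation split so scores are comparable across fractions.
X_tr, y_tr = X[:8000], y[:8000]
X_val, y_val = X[8000:], y[8000:]

def ridge_fit(X, y, alpha):
    """Closed-form ridge regression: solve (X^T X + alpha*I) w = X^T y."""
    k = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(k), X.T @ y)

def val_mse(w):
    return float(np.mean((X_val @ w - y_val) ** 2))

alphas = [0.01, 0.1, 1.0, 10.0, 100.0]
best_per_frac = {}
for frac in (0.01, 0.10, 0.25, 1.0):
    m = int(len(X_tr) * frac)
    scores = {a: val_mse(ridge_fit(X_tr[:m], y_tr[:m], a)) for a in alphas}
    best = min(scores, key=scores.get)
    best_per_frac[frac] = best
    print(f"frac={frac:.2f}  best alpha={best}  val MSE={scores[best]:.4f}")
```

If the best `alpha` agrees across the small fractions, that is (weak) evidence a search on a subset will transfer; for deep nets the picture is murkier, since batch size, learning rate, and regularization strength are known to interact with dataset size, so I would still re-verify the top few configurations on the full data.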

“Denis pressed the stud on his watchband that halted the persistent prickling, then studied the message crawling across the Omega’s digital strip.” -Julian May, “The Metaconcert”, 1987.

scene set in: 2013

Not a bad bit of prognostication, imo.
