
After this post circulated for a while, I'd like to address some common objections and cruxes:

1. Many ML experiments may be bottlenecked not on software-engineer hours but on compute. See https://www.lesswrong.com/posts/auGYErf5QqiTihTsJ/what-indicators-should-we-watch-to-disambiguate-agi?commentId=kNHivxhgGidnPXCop. [An interesting point; it has been communicated to me that researchers inside labs are indeed bottlenecked by compute surprisingly often.]

2. AIs being bad at research ideation is just an elicitation issue that will be solved soon; writing Google Docs full of good ideas might also be much faster a year from now. [I have no idea how true this is.]

3. The space-travel "wait calculation" is not an accurate intuition pump for individual projects, except for the pretraining example (see the sketch after this list). [Correct. Any particular ongoing research project benefits from being started earlier; it is the field as a whole (or my own overall research output) that benefits from deferring ideas that are not temporally privileged and are more easily automated later.]
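
For readers unfamiliar with the reference: the wait calculation comes from interstellar-travel planning, where exponentially improving ship speed means a later launch can arrive before an earlier one. Below is a minimal Python sketch of that logic; all parameter values are hypothetical illustrations, and relativistic speed limits are ignored for simplicity.

```python
# Minimal sketch of the interstellar "wait calculation": if ship speed
# improves exponentially while you wait, a later launch can arrive sooner.
# All parameter values are hypothetical; relativity is ignored.

D = 20.0         # distance to destination, in light-years (hypothetical)
V0 = 0.001       # current ship speed, as a fraction of c (hypothetical)
DOUBLING = 15.0  # assumed years for ship speed to double

def arrival_time(wait_years: float) -> float:
    """Total years until arrival if we wait `wait_years` before launching."""
    speed = V0 * 2 ** (wait_years / DOUBLING)  # speed available at launch
    return wait_years + D / speed

best_wait = min(range(300), key=arrival_time)  # brute-force the optimum
print(f"Launch now:     arrive in {arrival_time(0):,.0f} years")
print(f"Wait {best_wait} years: arrive in {arrival_time(best_wait):,.0f} years")
```

The analogy swaps ship speed for AI-assisted research productivity: an idea that is easy to automate later may be worth deferring. As conceded above, this models aggregate output, not a single ongoing project, which compounds from an early start.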
