Hello again! This blog had nearly been consigned to the dustbin of my memory when I realised that all the reading I do is useless unless it's properly processed and stored for future analysis. Writing certainly helps me process and remember facts and ideas better. The vast deluge of information surrounding us tends to overwhelm me. Finding connections between theories, statements and ideas is key to understanding what's going on in the world, rather than just absorbing other people's opinions. How does one make connections in the face of so much junk, though? Unless you remember facts, there's no way to make connections while reading, and piecemeal facts, like a branch without a tree, are the first to be junked. My brain is a leaky sieve, so I think I need to form connections while reading itself, to save those poor lonely piecemeal facts. Cling on to my lifeboat! Let's take the brain on a learning game this year. :)
Today's article: a Guardian long read about whether scientific theories will disappear in the face of big data. It was prompted by a 2008 piece in which Chris Anderson, then editor of Wired, hypothesised (an ironic choice of word if ever there was one) that this would soon happen to science, given the new data and new computational capability. Has it, though? My major question about having AI tools analyse relationships in data is: are AI tools actually tools? Are they given instructions by researchers, or do they look at data afresh without the need for guidance? If they require guidance, the hand of the scientist cannot be removed from the analysis, and AI will just become another useful method for studying data, with the scientist making the inferences. If not, the scientist becomes merely a black-box operator, positing answers to real-world problems without understanding why. And isn't the basis of much of human endeavour a quest to answer the "why"? What's the point of us if we cannot understand? Am I, if I don't think?