
NeuralCapsule

Contemporary confederacies of dunces now coalesce around a very stable genius, it seems..

Recent Comments

  1. 1 day ago on Scott Stantis

    The definition of “we” is so often misconstrued here. In the early stages of the commercialization of the internet, there was a widely held perception among tech people that everyone could just run their own servers and internet services, which is, strictly speaking, true.. but also now obviously irrelevant.

    Beyond the general-purpose systems like ChatGPT that the public are playing with (and the very general business tasks so-called ‘prompt engineering’ can adapt them to), even the ‘open source’ AI models require a minimum of many tens of thousands of dollars and some seriously specialized knowledge to ‘fine-tune’ to one’s purposes (and of course many millions of dollars and teams of math and CS specialists to create from scratch).
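
    (For a sense of what even the ‘affordable’ end of that looks like, here is a minimal sketch of a parameter-efficient LoRA fine-tune using the Hugging Face transformers and peft libraries. The model name is a placeholder, the target module names assume a LLaMA-style checkpoint, and a real run still presumes a serious GPU budget plus the data preparation and training loop not shown here..)

    ```python
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    BASE = "your-org/your-open-weights-model"   # placeholder checkpoint name

    tokenizer = AutoTokenizer.from_pretrained(BASE)
    model = AutoModelForCausalLM.from_pretrained(BASE)

    # LoRA trains small low-rank adapter matrices instead of all of the base
    # model's billions of weights, which is the 'cheap' end of fine-tuning.
    lora = LoraConfig(
        r=16,
        lora_alpha=32,
        lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"],    # attention projections that get adapters
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora)
    model.print_trainable_parameters()          # typically well under 1% of total weights
    ```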

    Which means that this technology is even more inherently oligopolistic than the original internet (which nevertheless centralized on massive network-effect social media like Facebook, hyperscale cloud providers like AWS and edge POPs like Cloudflare..)

    So, saying “we can just turn it off” is true, so long as ‘we’ is understood to be the trillionaires and autocrats who will actually make decisions like that..

  2. 1 day ago on Scott Stantis

    So no actual points then..? K, Thx !

  3. 2 days ago on Scott Stantis

    Excuse the delay, busy couple of days in the tech world..

    It appears you feel like you are making some kind of point here by being unimpressed.. Okay..

    Do you think that, because you are unimpressed, software, legal, medical and other records-management groups are not using LLMs to radically redesign business processes in ways that are already resulting in large-scale layoffs and will continue to do so through the decade you mentioned?

    As an enterprise software consultant involved in large scale business systems, I can personally state that they are doing so.

    Do you think LLMs will not totally change what little remains of journalism in the same period? Politics (you know, the stuff these comics we’re debating are about)? These are going through massive (and, one would think, obvious) generative-AI-driven changes as we speak..

    Basically, all the human-language information processes that were made incrementally more productive by various editors and the piecemeal automation of macros, regex, and then autocorrect, autocomplete and various (internal) search engines are being superseded by tech that is rapidly on its way to taking the instructions a senior programmer, writer, billing coder, whatever would currently give to junior staff and executing them autonomously..

    I’m not really sure why the fact that LLMs are unlikely to replace the senior staff makes it any less catastrophic for junior staff (and upcoming graduates), just as I am unclear as to why the inability to do logic proofs makes this tech any less devastating in the public ICT sphere we all must try to stay informed in.. I mean, in a democracy, is it really any comfort that only a top-n-percent minority of citizens continues to be well informed…?

    So, yeah, LLMs can’t do physics and math.. If we were talking about this on a math substack then, sure.. You do get that this is a political cartoon referencing AI and creatives, right? Any creative work will be upended by LLMs.. And soon.

  4. 4 days ago on Scott Stantis

    So, the salient question becomes one of learning, not a recapitulation of the symbolic logic AI winter.

    Here, we must leave behind the “copy/paste on steroids”, “stochastic parrot” type definition of LLMs and consider what LLMs really are:

    Machine learning is the general term for statistical learning, which only really began to eclipse the then-dominant ‘expert systems’ in the age of GPU-parallel tensor processing over effectively unlimited data stores, which made practical techniques like neural networks and increasingly sophisticated ‘deep’ variants of such networks: recurrent, convolutional and, importantly, transformers..

    In their 2017 paper “Attention Is All You Need”, Google scientists introduced the transformer architecture underlying LLMs, which, through truly massive parameter counts (approaching trillions) and repeated ‘self-attention’ weighting of every word against every other in its corpus, builds a kind of sophisticated topological map of every likely use of all ingested words..
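
    For the technically curious, here is a minimal sketch (in plain NumPy, my own illustration rather than anything taken from the paper itself) of the scaled dot-product self-attention step that does this weighting: every token is scored against every other token, and each output is a weighted blend of the whole sequence..

    ```python
    import numpy as np

    def self_attention(Q, K, V):
        """Scaled dot-product attention, the core operation from 'Attention Is All You Need':
        score every token's query against every token's key, softmax the scores into
        weights, then mix the value vectors with those weights."""
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)                         # (seq_len, seq_len) pairwise scores
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w = w / w.sum(axis=-1, keepdims=True)                   # row-wise softmax
        return w @ V                                            # each output mixes all tokens

    # Toy run: 4 "tokens" with 8-dimensional embeddings attending to each other.
    x = np.random.default_rng(0).normal(size=(4, 8))
    print(self_attention(x, x, x).shape)                        # (4, 8)
    ```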

    This, to me, is the greatest difference from ‘autocomplete’.. As we prompt an LLM it is not merely predicting the single most likely word sequence; it is navigating an unfathomably complex stochastic topology in a way unique to that interaction, which is why ‘prompt engineering’ is now the new hot kool kidz kareer..
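
    A tiny sketch of why a generated reply is unique to the interaction rather than a fixed ‘most likely’ continuation (my own toy illustration, with made-up vocabulary scores standing in for a real model’s output): generation samples from a temperature-shaped distribution instead of always taking the top-scoring token..

    ```python
    import numpy as np

    def sample_next_token(logits, temperature=0.8, top_k=50, rng=np.random.default_rng()):
        """Sample the next token id from temperature-scaled, top-k-truncated scores."""
        logits = np.asarray(logits, dtype=float) / temperature   # temperature reshapes the distribution
        candidates = np.argsort(logits)[-top_k:]                 # keep only the k best-scoring tokens
        probs = np.exp(logits[candidates] - logits[candidates].max())
        probs = probs / probs.sum()                              # softmax over the shortlist
        return rng.choice(candidates, p=probs)                   # stochastic pick, not argmax

    fake_logits = np.random.default_rng(1).normal(size=1000)     # stand-in for a model's vocabulary scores
    print(sample_next_token(fake_logits), sample_next_token(fake_logits))  # two runs can differ
    ```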

    Beyond that (which is actually a function of the computational cost of training LLMs), I think the nature of these topologies ought to prompt us to think on the nature of learning and knowledge representation in our own neuronal structures..

    LLMs do not ‘think’ in any way that we understand thinking, and I remain firmly in the no AGI camp, but I am not at all confident that these systems are incapable of learning other human thought patterns beyond languages with equal rapidity and fluency..

    Such systems, even if they are never capable of engaging in mathematics or physics as humans do, nevertheless are already well on the way to becoming force multipliers for almost any typical form of human communication..

    ..which is a bit scary.

  5. 4 days ago on Scott Stantis

    Oxford Languages (née Dictionaries) defines intelligence as:

    “the ability to acquire and apply knowledge and skills”

    ..by this admittedly concise definition, LLMs are, at least according to current research in IEEE, ACM, Sage, etc., demonstrating advanced (if artificial) applications of knowledge and skill.. I am happy to provide specific references to peer-reviewed research for the various applications in diverse areas such as those in my previous post, and, of course, to explore the intricacies of the more complex definitions of intelligence..

    Considering the specific aspect of LLMs and logic: while it is currently correct that these models are incapable of any reasoning that follows the rules of mathematics or physics, there are already promising lines of work addressing these deficiencies by leveraging symbolic logic (which had been the main thrust of AI research previously) without losing the broad abilities of generative approaches.

    Ex. “Coupling Large Language Models with Logic Programming”

    “Learning Non-linguistic Skills without Sacrificing Linguistic Proficiency”

    “Logic-LM: Empowering Large Language Models with Symbolic Solvers for Faithful Logical Reasoning”

    “Solving Math Word Problems by Combining Language Models With Symbolic Solvers”

    “Neuro-Symbolic Integration Brings Causal and Reliable Reasoning Proofs”

    It is certainly early days for this type of LLM application, and none of these is yet ready to engage in even the most basic interaction that probes these kinds of logic, but I would be very hesitant to bet against the development in those areas over the next decade, to say nothing of the next few years..
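
    To make the pattern concrete, here is a minimal sketch of the general idea this line of work explores (my own illustration, not taken from any of those papers’ implementations): the language model only translates a word problem into a formal representation, and a deterministic symbolic solver, SymPy here, does the actual reasoning. The translation step below is hard-coded as a stand-in for a model call.

    ```python
    from sympy import Eq, solve, symbols

    x = symbols("x")

    def llm_translate(word_problem: str):
        # Stand-in for the generative step: in this pattern the LLM emits a formal
        # representation of the problem; it is hard-coded here for illustration.
        return Eq(2 * x + 3, 11)        # "twice a number plus three is eleven"

    def symbolic_solve(equation):
        # The deterministic solver supplies the faithful logical/algebraic reasoning.
        return solve(equation, x)

    print(symbolic_solve(llm_translate("Twice a number plus three is eleven. What is the number?")))  # [4]
    ```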

    While there are many more aspects of a (more) complete definition of intelligence that bear on the Dystopian social aspects of LLMs I briefly touched on in my post above, this area matters in that, while not central to the potential impact of LLMs on society, it gets to the heart of what most consider ‘true’ intelligence..

    (continues)

  6. 5 days ago on Scott Stantis

    So, in the ancient world of early computers, programming languages evolved from mere convenience mnemonics over op-codes to increasingly sophisticated systems of logic which shaped the future of system designs themselves..

    ..as tempting as it is for some to equate these languages (in their 4th generation and onwards) to human language, there was always a hard divide between these deterministic languages and the ‘natural’ languages we co-evolved with as a species..

    ..now we have large language models (LLMs) which run on deterministic, stateful machines but are stochastic black boxes, and which somehow, across trillions of numeric weight-activation ‘neurons’, are rapidly manipulating human language..

    ..these same LLMs are rapidly progressing in areas from legal discovery across terabytes of complex professional legal documents to drug discovery across petabytes of bioinformatics, working against unstructured medical records and research publications..

    ..we may say that LLMs “have no soul or depth”, but the fact is that they are already capable of passing any graduate-level writing exam and of imitating writers living and dead in ways that all but literature researchers would fail to detect..

    ..the (Dystopian) future is already here, and in addition to running a nail gun over the coffin lid of believable online information, LLMs will expose the lie of cherished human terms which we have always resorted to without any precise definitions, like soul, intelligence, sapience, consciousness, sentience..

    ..such terms are, like democracy, camaraderie and empathy, so powerful precisely because we can feel connected to others through them by believing that our shared evocation of them is the product of some deeply shared values, and not merely of widely held misconception..

    ..in the brave new world of LLMs (which is already upon us) these cherished terms, like Turing’s puckish test, will fall away in a supernova of inhuman abilities very possibly rapidly heading beyond our species’ ken…

  7. 9 days ago on Gary Varvel

    It is great that you are not opposed to higher education, but is it really just a fancy form of vocational training in your analysis..? I get the way it is marketed, the distasteful practices around maximizing subsidies, and that this is the only driver for the majority of the students paying tuition (taking out loans, etc.), but that is not the mission of universities; it is the mission of trade and vocational schools, which make up the vast majority of those with 95% acceptance rates.

    In America, high school does not prepare anyone for any career worth the designation anymore; this has been true at least since the beginning of the millennium.. We need post-secondary training to succeed as a (the?) preeminent global economy, and that runs the gamut from vocational school to post-doctoral research: universities are, frankly speaking, the critical leaders in this equation.

    Of course, this is not the only place where universities (must) lead.. Research, for those who are not involved in the process, is not (and never was) some lone genius in a lab full of Pyrex percolator props; it is a highly collaborative, globally interconnected (and often unavoidably political) process. What manufacturers we have – and, most importantly, what manufacturers we are likely to have in our future – are determined directly by the research ability of our universities, NOT by proprietary downstream technology applications at corporations. This is the vital national interest in having a vibrant system of universities – we fsck with this at our peril.

  8. 10 days ago on Gary Varvel

    I’ve only been here (more off than on as of late) for a few years, but even in that span the comment quality has sunk drastically, so, if I’m going to do anything here, it will be solely out of interest in the increasingly rare, limited aspects of threads where I can find them.

    The comics like Varvel, Lester, Payne, Goodwyn, Bok, Gorrell, et al. are well known, as are the dynamics of the ‘featured comment’, so I feel like, once the name-calling coalesces into threads, I will try to comment where I can say something.. The repetitive and well-known biases here are frankly uninteresting to me at this point.

  9. 10 days ago on Gary Varvel

    Yeah.. I’m really feeling my age trying to keep up with the math of language transformer models.

  10. 10 days ago on Gary Varvel

    Interesting exchange between you and cwg here.

    I concur that vocational training and a work ethic are necessary to this nation’s future, but they are not at all sufficient.

    While some college degrees are surely useless for employment, I would like to see any serious examples of college degrees generally seen as ‘red flags’ ..hyperbole is one thing, but BS is another thing, no?

    I have also taken several degrees (info sys, CS and applied math) and have used them to build a nice niche in Higher Ed IT, which, especially down here in the SEC, is a business first and foremost.

    There are a lot of degrees that could be justifiably called revenue generators, and they are sadly proliferating. That said, the viable degrees down here are often literally developed with direct input from the major regional employers; they are anything but ‘red flags’.

    While the type of ‘IT’ the three of us were likely trained in back in the day may be fading, that is not even slightly true for the IT industry, which is poised to usurp more of what today’s Americans view as good jobs than CAD/CAM ever did.

    In my consulting roles, I deal with C-suite uni admins who talk seriously about DEI missions / hires / promotion req’s (which MIT just abandoned BTW) – I make no apologies for them or the damage they do – that said, I am more sure than ever that the classic humanist educational principles that all universities derive from are more important than ever before.

    In a decade or so we are going to be faced with existential challenges from artificial intelligence (I work with LLMs), climate change and the body blows we have dealt to our governments in the developed world.

    Political leadership will absolutely not be able to handle this, and if we do not have professionals who can handle complex diverse problems, we are well and truly doomed.

    This is the real red flag.