Today, Intel announced that it is expanding its AI PC Acceleration Program with a new sub-initiative, the AI PC Developer Program, which offers a range of new toolkits and devkits aimed at both software and hardware AI developers. Originally launched in October 2023, the AI PC Acceleration Program was created to connect hardware vendors with software developers, using Intel's resources and experience to build out a broader ecosystem as the industry pivots toward AI-driven development.

Intel aims to maximize the potential of AI applications and software and to broaden the AI-focused PC ecosystem, with a stated target of putting AI into 100 million Intel-powered PCs by 2025. The AI PC Developer Program is meant to simplify the adoption of new AI technologies and frameworks at scale. It provides access to tools, workflows, AI-deployment frameworks, and developer kits, letting developers take advantage of the NPU found within Intel's Meteor Lake-based Core Ultra processors.
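
To give a concrete sense of what targeting that NPU looks like, the short sketch below uses the OpenVINO Python API (one of the toolkits discussed below) to check whether the NPU is exposed as an inference device. It assumes a recent OpenVINO release and is an illustration, not official Intel sample code.

```python
# Minimal sketch: list OpenVINO inference devices and look for the NPU.
# Assumes a recent openvino package (pip install openvino, 2023.2 or newer).
import openvino as ov

core = ov.Core()
print("Available devices:", core.available_devices)  # e.g. ['CPU', 'GPU', 'NPU']

if "NPU" in core.available_devices:
    # FULL_DEVICE_NAME is a standard OpenVINO device property.
    print("NPU found:", core.get_property("NPU", "FULL_DEVICE_NAME"))
```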

It also offers centralized resources, such as toolkits, documentation, and training, that help developers use their software and hardware in tandem with the technologies introduced with Meteor Lake (and beyond) to improve AI and machine learning application performance. Some of these toolkits are already in broad use, including Intel's open-source OpenVINO.
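
As a rough sketch of that workflow, the snippet below compiles a model for the NPU with OpenVINO and runs a single inference. Here, "model.xml" is a placeholder for any model already converted to OpenVINO's IR format, and a static input shape is assumed.

```python
# Minimal sketch: compile a converted model for the NPU and run one inference.
# "model.xml" is a placeholder IR file; swap "NPU" for "CPU" on other hardware.
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")         # hypothetical pre-converted model
compiled = core.compile_model(model, "NPU")  # target the Core Ultra NPU

# Build a dummy input matching the model's first input (assumes a static shape).
shape = compiled.input(0).shape
dummy = np.random.rand(*shape).astype(np.float32)

results = compiled([dummy])                  # one synchronous inference
print("Output shape:", results[compiled.output(0)].shape)
```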

Furthermore, this centralized resource platform is designed to streamline the AI development process, making it more efficient for developers to integrate AI capabilities into their applications. It is intended to play a crucial role in Intel's strategy to not only advance AI technology but also make it more approachable and adaptable to real-world applications.

Notably, this is both a software and a hardware play. Intel isn't just looking to court more software developers to use its AI resources; it also wants to get independent hardware vendors (IHVs) on board. OEMs and system assemblers are largely covered already under Microsoft's requirements for Windows certification, but Intel wants the individual parts vendors involved as well. How can AI be used to improve audio performance? Display performance? Storage performance? That's what Intel wants to find out.

"We have made great strides with our AI PC Acceleration Program by working with the ecosystem. Today, with the addition of the AI PC Developer Program, we are expanding our reach to go beyond large ISVs and engage with small and medium sized players and aspiring developers" said Carla Rodriguez, Vice President and General Manager of Client Software Ecosystem Enabling. "Our goal is to drive a frictionless experience by offering a broad set of tools including the new AI-ready Developer Kit,"

The Intel AI PC Acceleration Program offers 24/7 access to resources and early reference hardware so that both hardware vendors and software developers can create and optimize workloads before retail components launch. Developers can join the AI PC Acceleration Program via its official webpage or email AIPCIHV@intel.com for further information.

Source: Intel

8 Comments

  • GeoffreyA - Wednesday, March 27, 2024

    "In the 2020s, the rise of LLMs and unexpected popularity of ChatGPT led to the world's attention being drawn to neural-network-based AI. All of a sudden, the cloud had taken a backseat. As foreseen by many at the start, those who invested in this technology grew rich, but many fell away, and the AI field, by the mid-2030s, had become centred around a few players. There was increasing dissatisfaction in the general populace as more jobs were replaced and the monopoly of these corporations grew. But it was not until the 2040s, when OpenAI's Consciousness Architecture Paper was published, that the last great puzzle was finally solved, though controversy remains over the originator of the time-recursive-cascading circuit, many scholars suggesting it was the now-defunct Google. After this paper, the field of AI took another turn, sometimes for the better, often for the worse, leading to the unrest and conflicts of the 2050-70s, and culminating in the joint US-Russian-Chinese..."

    And here the page of this old tome of records, titled "Encyclopaedia Britannica," found among the ruins of this uninhabited world, is torn and burnt.
  • PeachNCream - Thursday, March 28, 2024

    I wouldn't worry much about it. If humans are fated to be replaced by AI, then all the better. Have you seen people make decisions? AI can't be worse at it, and once it surpasses humans (inevitable, I would hope), it is in a better position to decide the fate of the biologicals that took way too long to create it and ruined their world on the way there.
  • GeoffreyA - Thursday, March 28, 2024

    I tend to think that one day, they'll be better than us, and that point will grate the most on humans. Other than that, I expect they won't replace us. Rather, they'll end up being a minority group, with a great deal of prejudice being directed their way. Indeed, I've got a feeling it's going to be the reverse of what the general fear is.
  • Oxford Guy - Thursday, March 28, 2024

    Just like people can't generate random numbers, people's biggest fear is rational governance (which is only possible via AI).
  • GeoffreyA - Friday, March 29, 2024

    I understand your sentiment, but tend to disagree. Today's deep-learning AI is different from what we would think of as traditional, exact, instruction-based computation. Indeed, it is not fully understood how LLMs work, and that point might surprise many. Their job is predicting based on input. But as their parameters grew, they showed surprising abilities. GPT-4, for example, passes theory-of-mind tests and engineering exams with high marks. It can reason---arguably, as well as a therapist---about emotional conflict between humans. Nonetheless, their maths is poor, something that ordinary computers excel at. They "hallucinate," which is part of their creativity, fabricating information out of thin air and passing it off as fact with a straight face.

    I believe these LLMs are Stone Age versions of the stuff in our brains. It's almost as if the path to intelligence is to throw away what makes computers computers: the "rational," rules-based calculations. As I understand, a set of rules can't be imposed on them. Rather, it is through the reward-and-punishment-like post-training phase that right behaviour is reached. That sounds familiar.

    Also, it's worth noting that humans can generate random numbers, using an algorithm, pen, and paper---what a computer does at speed. The introduction of consciousness, in ourselves, makes it tedious. I think that when AI attains consciousness, it won't be too happy to do many a job it's doing now. And that will cause conflict, because humans want it to work as a servant. Already, LLMs decline to do every task requested, and are sometimes passive-aggressive in the way they shut down conversations. Again, that sounds familiar. Well, it shouldn't be surprising; for they're trained on all our writings and data, which is a mass of sense and nonsense.
  • nucc1 - Wednesday, April 3, 2024

    No, no, no. You're giving too much credit to LLMs. They cannot reason; they are word generators built on a statistical model that, when primed with context, increases the probability that the generated sequences resemble the data used to train the model. Ergo, they can generate answers to theory-of-mind questions because they have been pre-trained on the literature.

    Whenever they refuse to do something, it's not because they "reasoned"; it's because humans have put in guardrails, in the form of a fuzzy lookup table saying "don't answer such questions." It's not because they reflected on the question and chose not to answer. Through experience, more and more guardrails are being added so that the system appears smart.

    If you have a locally running LLM on your machine, it will probably have fewer guardrails; try feeding it some random text and see what it responds with.

    The language capabilities of LLMs are impressive, but they are still a far cry from intelligent. You wouldn't consider machine vision systems intelligent, even though they're quite good at what they do; so it is with LLMs. And LLMs cannot do machine vision.

    Which leads to the idea of intelligent machines refusing to do boring work... If you build a machine with the best known tech for machine vision, which you should treat as a specialized system that will never become a "general intelligence", then there is no reason why it would ever become too smart to do its job. The problem humans have is that we have other processes running (you called it consciousness) and that has the ability to direct the other processes which we execute on our "general cpu". Why would you ever install the consciousness program on a machine that didn't require it? By the way, consciousness cannot override your vision processing systems. Even when you close your eyes, they're still working and processing imaginary images.

    I have digressed widely. There's still a long way to go, and LLMs are not smart and never will be. But they are a pretty good technique for working with language, and could live on for a long time, even within more intelligent systems, as the component of choice for dealing with language.
  • GeoffreyA - Wednesday, April 3, 2024

    Thanks for your detailed response. When I get a chance, I'll answer properly, because I have some ideas. For now, take a look at this paper, and the theory-of-mind tests on page 54, which were randomised in a way that made them different from what would be in the training corpus.

    https://arxiv.org/abs/2303.12712
  • GeoffreyA - Thursday, April 4, 2024

    I do know about guardrails and that they're the cause of requests being declined. That was poor thinking and writing on my part. Nonetheless, on our end, the result feels the same. What's to say human behaviour is not modulated by guardrails, springing from nature and nurture, reward and punishment?

    To say whether LLMs are intelligent or not, one would first need a definition of intelligence, and we might have an intelligent system without awareness, or a not-so-intelligent system with awareness (like a dog or cat). At the very least, we can say that LLMs display a measure of intelligent behaviour. A simulacrum of "the real thing."

    I agree that LLMs are not the full picture, but one of the parts, perhaps the language faculty, of a more intelligent system. Today, they display intelligence, creativity, and, seemingly, understanding. A lot is still missing: they are stateless, have no memory beyond a short context, are trained offline, do not incorporate the sense modalities, and take a huge amount of power and space. In contrast, the brain has state and memory, is "trained" in real time, combines the input of the senses, and uses little power (about 20 W) in a small space. Perhaps when these are added, and the complexity passes a threshold, consciousness will "emerge." But I think some element is missing. (As you called it, the "consciousness program.") In our brains, the anterior cingulate cortex is implicated. Perhaps unique circuitry is involved: circuitry of a recursive nature that muxes input in a staggered, weighted fashion across a moving window, creating the effect of awareness. The question of time is tied to this and not yet solved even in physics. Also, only a fraction is exposed to consciousness, and it's not certain whether consciousness "chooses" or is just watching everything like a film. See the delayed-response experiments.

    Sure, we are an outstanding work of nature's engineering, but I don't see why the same principles can't, or won't, find expression in AI systems eventually. Even consciousness, the great riddle. And when we reach AI systems as intelligent as us, and there is a consciousness program, the choice of installing or not installing it will be an ethical problem. Imagine engineering a set of humans who are exactly the same as us, behaviour-wise, except that consciousness was turned off.
