Microsoft has launched three new AI reasoning models: Phi-4-reasoning, Phi-4-reasoning-plus, and Phi-4-mini-reasoning.

These are small language models, designed for edge devices like Windows PCs and mobile devices.

The Phi-4-reasoning model has 14 billion parameters and can perform complex reasoning tasks.

Microsoft announces Phi-4 reasoning AI models

Image Credit: Microsoft

The Phi-4-reasoning-plus model uses the same base model, but it applies more inference-time compute, using about 1.5x more tokens than Phi-4-reasoning to deliver higher accuracy.

Despite being much smaller in size, the Phi-4-reasoning models match large models such as DeepSeek R1 671B and o3-mini.


Phi-4-mini-reasoning benchmark performance

Image Credit: Microsoft


In the GPQA benchmark, the Phi-4-reasoning-plus-14B model achieves 69.3% while o3-mini scores 77.7%.

Next, in the AIME 2025 test, Phi-4-reasoning-plus-14B gets 78%, while o3-mini achieves 82.5%.

This goes to show that Microsoft's small models come very close to flagship reasoning models that are much larger in size.
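To put those margins in perspective, the benchmark figures above can be compared directly. This small sketch (scores taken from the article) computes how far the much larger o3-mini leads the 14B Phi model on each benchmark:

```python
# Benchmark scores quoted in the article (percent).
scores = {
    "GPQA": {"Phi-4-reasoning-plus-14B": 69.3, "o3-mini": 77.7},
    "AIME 2025": {"Phi-4-reasoning-plus-14B": 78.0, "o3-mini": 82.5},
}

for bench, s in scores.items():
    # Margin by which o3-mini leads the 14B Phi model.
    gap = round(s["o3-mini"] - s["Phi-4-reasoning-plus-14B"], 1)
    print(f"{bench}: o3-mini leads by {gap} percentage points")
```

The gaps come out to 8.4 points on GPQA and 4.5 points on AIME 2025, which is what "very close to flagship reasoning models" amounts to in concrete terms.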

Microsoft says Phi-4 reasoning models are trained via supervised fine-tuning "on carefully curated reasoning demonstrations from OpenAI o3-mini."

Further, Microsoft writes, "The models demonstrate that meticulous data curation and high-quality synthetic datasets allow smaller models to compete with larger counterparts."

Apart from that, the smaller Phi-4-mini-reasoning model, trained on just 3.8B parameters, outperforms many 7B and 8B models.

In benchmarks like AIME 24, MATH 500, and GPQA Diamond, the Phi-4-mini-reasoning-3.8B model delivers competitive scores, closely matching o1-mini.

The Phi-4-mini model has been "fine-tuned with synthetic data generated by the DeepSeek-R1 model."


Microsoft's Phi models are already being used locally on Windows Copilot+ PCs, where they leverage the built-in NPU.

It will be interesting to see how the Phi-4 reasoning models improve on-device AI performance.