Saturday, January 18, 2025

Why is ChatGPT so much better than Gemini?


OK, I think we know where this is going to go, but let’s ask ChatGPT anyway. Perhaps some glimmer of new thinking may emerge?

“Comparing ChatGPT and Gemini on technical questions covering physics, philosophy, history, economics and politics, ChatGPT is significantly better, operating at doctoral level while Gemini operates like an overconfident new undergraduate. Given the level of technical expertise at Google, I'm struggling to understand why Gemini is not better. Are the guardrails too restrictive? Or does Google fear cannibalisation of its search business…?”


ChatGPT

Your observation highlights a paradox: Google, with its immense technical expertise, has produced a model (Gemini) that, in your experience, underperforms OpenAI's ChatGPT on complex, technical topics. There are several potential reasons for this discrepancy, which likely involve a combination of technical, strategic, and business considerations. Let’s unpack them:

1. Guardrails and Safety Constraints

Google may have opted for stringent safety measures to avoid generating controversial or harmful content. While OpenAI also employs guardrails, they are arguably more balanced to preserve functionality in tackling nuanced topics. Gemini’s guardrails could be overly restrictive, truncating its ability to engage deeply with complex or sensitive subjects.

Impact: Guardrails can inhibit the model's ability to present diverse perspectives, delve into contentious theories, or weigh competing interpretations—all essential for tackling subjects like philosophy, economics, and politics.

2. Training Focus and Dataset

OpenAI and Google might prioritize different aspects during model training. If Google leaned toward creating a generalist assistant optimized for casual or everyday use, its training data might have emphasized breadth over depth, leading to a more superficial handling of technical subjects.

Impact: While Gemini might excel in accessibility and conversational friendliness, it could lack the depth to engage with doctoral-level queries due to differences in the corpus it was exposed to or in the weighting of technical knowledge during fine-tuning.

3. Strategic Business Positioning

Google's core business is search, which drives massive ad revenue. A highly capable AI model like ChatGPT could disrupt this by reducing reliance on search engines. Google may be deliberately positioning Gemini as a complementary tool that nudges users toward traditional search when deeper expertise is required.

Impact: If Gemini defers to search results rather than synthesizing knowledge directly, it could appear less capable in answering complex questions. This would safeguard Google’s search revenue while still showcasing its AI capabilities.

4. Developmental Stage

Gemini might still be in a relatively early phase of its lifecycle compared to ChatGPT, which has benefited from extensive fine-tuning, user feedback, and iteration. If Gemini lacks comparable refinement or exposure to real-world user queries, its performance will naturally lag.

Impact: The perception of Gemini as a “new undergraduate” may reflect its developmental immaturity rather than its ultimate potential.

5. Philosophical Design Differences

Google may have intentionally designed Gemini to reflect a less assertive epistemic stance. ChatGPT has been explicitly optimized for reasoning, synthesis, and critical analysis—qualities associated with advanced academic performance. If Gemini is instead geared toward avoiding definitive claims or controversial takes, this could limit its intellectual engagement.

Impact: This design philosophy might resonate better with a general audience but frustrates users seeking rigorous, doctoral-level discussion.

6. Competitive Pressures

OpenAI’s ChatGPT has achieved prominence as a model of excellence, setting a high benchmark for conversational AI. If Google underestimated the competitive landscape or focused on different metrics of success (e.g., multimodal capabilities), this could explain why Gemini’s depth in technical domains lags.

Impact: Google may now be playing catch-up in areas like technical proficiency, even as it potentially outpaces competitors in other areas (e.g., integration with search or multimodal processing).

7. Corporate Risk Aversion

Google is a highly visible, highly regulated company with immense scrutiny from governments and the public. It may be more conservative in releasing powerful AI systems due to concerns about reputational or regulatory backlash, particularly in areas like misinformation or bias.

Impact: This risk aversion could lead to a model that prioritizes cautious responses over depth or assertiveness, particularly in contentious fields like politics and philosophy.

Conclusion:

The perceived disparity between ChatGPT and Gemini likely reflects a combination of guardrails, strategic positioning, and developmental priorities. While ChatGPT may feel more robust on technical topics, it operates within a framework that balances depth with accessibility. Gemini’s underperformance might be intentional, designed to align with Google's broader business and ethical goals, or simply a reflection of its stage of refinement.

Should Google decide to prioritize technical expertise, Gemini could potentially close the gap—but not without reevaluating its trade-offs between safety, functionality, and market strategy.


Nothing too surprising there, but it's a comprehensive and intelligent assessment of the issues. I'm good with that.

Risk aversion? You don't say!

Now you're going to say: why don't we put the same question to Gemini?

I’m expecting a bland, cautious and risk-averse response which evades the main issues. Let’s see …


Gemini

There are several potential reasons why Gemini might not be performing as well as ChatGPT on technical questions, even though Google has a high level of technical expertise:

Focus on Different Strengths: Gemini might be designed with a different focus than ChatGPT. It could be optimized for tasks like image generation, code generation, or multimodal understanding, rather than purely text-based question-answering. This could explain its lower performance on technical questions within specific domains like physics, philosophy, history, economics, and politics.

Data Limitations: The quality and quantity of data used to train Gemini could be a factor. If the training data is not comprehensive or accurate enough in these specific domains, Gemini's ability to answer complex questions might be limited.

Algorithmic Differences: The underlying algorithms and architectures of the two models could be different. ChatGPT might have a more sophisticated architecture or training process that allows it to excel in these specific areas.

Evaluation Metrics: The way Gemini is evaluated might not fully capture its capabilities. If the evaluation metrics are primarily focused on factual accuracy, it might miss out on other important aspects of performance, such as the ability to provide nuanced explanations or engage in complex reasoning.

Safety and Bias Mitigation: Google might have implemented stricter safety and bias mitigation measures in Gemini, which could sometimes lead to more conservative or less informative responses. This could be perceived as lower performance, especially when compared to a model that is less constrained.

Regarding your concerns about guardrails and cannibalization of the search business:

Guardrails: While it's possible that overly restrictive guardrails could hinder Gemini's performance, it's also important to consider the potential negative consequences of an unconstrained AI model. Guardrails are essential for ensuring safety, fairness, and ethical behavior.

Cannibalization: It's true that a powerful AI model like Gemini could potentially impact Google's search business. However, Google could also leverage Gemini to enhance its search capabilities, providing users with more comprehensive and informative answers. This could lead to a new and improved search experience, rather than cannibalizing the existing business.

It's important to note that Gemini is still under development, and its capabilities are likely to improve over time. As Google continues to refine the model and address its limitations, we can expect to see significant advancements in its performance across various tasks, including technical question-answering.

Here are some additional points to consider:

Context: The specific questions and the context in which they are asked can significantly impact the performance of both models.

Human Evaluation: Human evaluation plays a crucial role in assessing the quality of AI-generated responses. Different human evaluators might have different perspectives and priorities, which can lead to varying assessments of performance.

Transparency: Lack of transparency regarding the training data, algorithms, and evaluation methods can make it difficult to understand the limitations and biases of AI models.

By carefully considering these factors and engaging in open and transparent discussions, we can gain a better understanding of the strengths and weaknesses of both ChatGPT and Gemini, and work towards developing AI models that are truly beneficial to society.


Neither response is particularly interesting, but Gemini, as I often find, somehow seems to miss the point, producing bland, unfocused boilerplate. It doesn't really address the implications of the question as asked. ChatGPT is so much more acute.
