• trevor (he/they)@lemmy.blahaj.zone
          3 months ago

          Is it actually open source, or are we using the fake definition of “open source AI” that the OSI has massaged into being so corpo-friendly that the training data itself can be kept a secret?

          • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP
            3 months ago

            The code is open, weights are published, and so is the paper describing the algorithm. At the end of the day anybody can train their own model from scratch using open data if they don’t want to use the official one.

            • trevor (he/they)@lemmy.blahaj.zone
              3 months ago

              The training data is the important piece, and if that’s not open, then it’s not open source.

              I don’t want the data just to avoid using the official model. I want the data so that I can reproduce the model. Without the training data, you can’t reproduce the model, and if you can’t do that, it’s not open source.

              The idea that a normal person can scrape the same amount and quality of data that any company or government can, and tune the weights enough to recreate the model is absurd.

              • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP
                3 months ago

                What ultimately matters is the algorithm that makes DeepSeek efficient. Models come and go very quickly, and that part isn’t all that valuable. If people are serious about wanting to have a fully open model then they can build it. You can use stuff like Petals to distribute the work of training too.

                • trevor (he/they)@lemmy.blahaj.zone
                  3 months ago

                  That’s fine if you think the algorithm is the most important thing. I think the training data is equally important, and I’m so frustrated by the bastardization of the meaning of “open source” as it’s applied to LLMs.

                  It’s like a conventional software project that’s a thin wrapper over a proprietary library you must link against, and then calls itself open source. The wrapper is open, but the actual substance providing the functionality isn’t.

                  It’d be fine if we could just use more honest language like “open weight”, but “open source” means something different.

  • Sem@lemmy.ml
    3 months ago

    DeepSeek collects and processes all the data you send to their LLM, even via API calls. That’s a no-go for most business applications. For example, OpenAI and Anthropic do not collect or process data sent via the API in any way, and both offer an opt-out setting that prevents processing of data sent via the UI.

    • hungrybread [comrade/them]@hexbear.net
      3 months ago

      I’m too lazy to look for any of their documentation about this, but it would be pretty bold to believe privacy or data-processing claims from OpenAI or similar AI orgs, given their history of flouting copyright.

      Silicon Valley more generally just breaks laws and regulations to “disrupt”. Why wouldn’t an org like OpenAI at least leave a backdoor for themselves to process API requests down the road via a policy change? Not that they would need to, but it’s not uncommon for a company to leave an escape hatch in its policies.