• US occupying forces in northern Syria are continuing to plunder natural resources and farmland, a practice ongoing since 2011
  • Recently, US troops smuggled dozens of tanker trucks loaded with Syrian crude oil to their bases in Iraq.
  • The fuel and convoys of Syrian wheat were transported through the illegal settlement of Mahmoudia.
  • Witnesses report a caravan of 69 tankers loaded with oil and 45 with wheat stolen from silos in Yarubieh city.
  • Similar acts of looting occurred on the 19th of the month in the city of Hasakeh, where 45 tankers of Syrian oil were taken out by US forces.
  • Prior to the war and US invasion, Syria produced over 380 thousand barrels of crude oil per day; output has since fallen drastically to only 15 thousand barrels per day.
  • The country’s oil production now covers only five percent of its needs; the remaining 95 percent must be imported, with difficulty, because of the US blockade.
  • The US and EU blockade prevents the entry of medicines, food, and supplies, and impedes technological and industrial development in Syria.
  • zephyreks@lemmy.mlM · 7 months ago

    Thing is, even if he is good at media criticism, there are no stakes for him. Nobody knows who he is or what he looks like, he has nothing on the line, and his credibility in his primary occupation cannot be harmed if he is wrong.

    Never mind that he has neither the credentials nor any legitimate scientific expertise, and yet claims that his Bachelor’s in Physiology was sufficiently advanced to teach him everything he needs to know about the scientific process.

    The dataset is seen in academia as accurate enough to train machine learning models on or to make aggregate claims from. Machine learning models are not bastions of truth, and neither are their datasets.

    • nahuse@sh.itjust.works · 7 months ago

      Machine learning has nothing to do with this. I am referring to academics who study journalism, communication, political science, or sociology.

      And it doesn’t really matter who he is at this point: the product he created works well and continues to be a reliable tool for interrogating media sources.

      I am happy that a person is able to create such a useful product, keep maintaining it and proving its reliability, and still remain anonymous. I certainly would want to remain anonymous if I were creating something that actively worked to check people’s information bias.

      But it’s an irrelevant discussion: who he is doesn’t really matter when evaluating the work of the site itself.

      • zephyreks@lemmy.mlM · 7 months ago

        “[MBFC’s] subjective assessments leave room for human biases, or even simple inconsistencies, to creep in. Compared to Gentzkow and Shapiro, the five to 20 stories typically judged on these sites represent but a drop of mainstream news outlets’ production.” - Columbia Journalism Review

        “Media Bias/Fact Check is a widely cited source for news stories and even studies about misinformation, despite the fact that its method is in no way scientific.” - PolitiFact journalists

        MBFC is used when analyzing large swathes of data because they have ratings for basically every news outlet. In that setting, even if a quarter or a third of the ratings are wrong, you can still generate enough signal to separate it from the noise.
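
        As a rough illustration of that claim (a minimal sketch with made-up numbers, not anyone’s actual data): a rater that mislabels a sizeable fraction of individual outlets can still separate groups of outlets on average.

        ```python
        import random

        # Made-up numbers: suppose a rater labels each outlet's reliability
        # wrongly 30% of the time. Individual labels are often wrong, but
        # averages over large groups of outlets still separate cleanly.
        random.seed(0)

        def noisy_label(truly_reliable, error_rate=0.30):
            # Flip the true label with probability error_rate.
            flipped = random.random() < error_rate
            return int(truly_reliable) ^ int(flipped)

        reliable = [noisy_label(True) for _ in range(1000)]
        unreliable = [noisy_label(False) for _ in range(1000)]

        print(sum(reliable) / len(reliable))      # ~0.70
        print(sum(unreliable) / len(unreliable))  # ~0.30
        ```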

        It absolutely matters who is running a site, because accountability is inherent to journalism. There’s a reason you don’t see NYT articles from “Anonymous Ostrich.”

        • nahuse@sh.itjust.works · 7 months ago

          I accept your point about why it matters who runs the site. I would just argue that in this case it’s not as relevant, because the goal genuinely seems to be information transparency, which is consistently delivered across its work. Its findings are at least generally reproducible. But no, it’s not scientific; I believe I’ve stated that already. However, it’s a good indication of a source’s reliability.

          Yes, human bias creeps in, hence my point about using it alongside general media literacy and critical thinking when evaluating media.

          It aggregates and analyzes a ton of sources, and gives generally accurate information about how they are funded, where they are based, and how well they cite original sources. These are all things that can be corroborated by a somewhat systematic reading of the sources themselves.

          • zephyreks@lemmy.mlM · 7 months ago

            An LLM also “aggregates and analyzes a ton of sources, and gives generally accurate information about how they are funded, where they are based, and how well they cite original sources.”

            That doesn’t make an LLM a useful source.

            • nahuse@sh.itjust.works · 7 months ago

              YEAH IT DOES.

              Jesus Christ; it’s literally one of the foremost things you have to consider when using an LLM as a tool.

              IT IS NOT GOSPEL. IT IS A TOOL THAT YOU CAN USE TO HELP YOU CREATE AN INFORMED OPINION, BUT IT IS NOT INFALLIBLE.

              IT IS USEFUL, NOT PERFECT.

              • zephyreks@lemmy.mlM · 7 months ago

                We don’t allow LLM-generated summaries as news stories. Do the legwork, use these tools to start if you want to, but don’t cite them as though they are gospel.

                • nahuse@sh.itjust.works · 7 months ago

                  What are you talking about? LLMs have no bearing on this conversation; you brought them up.

                  Are you saying that you don’t allow people to use tools to evaluate media and share their reasons for scepticism?

                  The bit that I quoted from MBFC is factual information (the story’s sponsors and an assessment of reliability), which I used to begin a conversation about the source.

                  That claim, upon further discussion, was indeed ultimately traced back to a Syrian governmental agency and has since been repeated by various other governmental sources. There has not yet been any evidence to support the allegations made by the original source, which supports MBFC’s assertion that the original news agency does not often provide reliable (by journalistic standards) justification for its news stories. It seems really weird for you to so vehemently oppose a resource that enables critical thinking.

                  The news article is an extension of at least one state agency, and there are critiques of its truthfulness. That’s the takeaway from my original comment.

                  I feel like I’m repeating myself, but I literally cannot fathom a good-faith justification for not allowing a widely accepted media-literacy tool here. (For clarity, I’m talking about MBFC, not any LLM stuff, which only serves to obfuscate things.)

                  This is all true, and comes

                  • zephyreks@lemmy.mlM · 7 months ago

                    I cannot fathom a good-faith justification for allowing a resource that intentionally obfuscates the media landscape by compressing it onto a 2D plane, run by a person who cannot be found through any conventional means and very well may not exist. Their methodology is bunk for a number of reasons, but we’ll focus specifically on how they evaluate factuality.

                    1. As you know, op-eds typically fall under different journalistic purview than news stories. This is as true for the NYT and SCMP (newspapers of record) as it is for Breitbart. Mixing the factuality rating for op-eds and news stories is rather questionable.

                    2. The rating scheme works by sampling (how? nobody knows) a small number of stories from each paper and evaluating their factuality. This destroys the validity of the data, as different news sources cover different stories and different categories of stories vary in factuality. For example, a paper that records daily weather temperatures in Toronto would be “very highly accurate” even if it releases a story saying that water is dry and trees are fake once a month (a sketch after this list illustrates the arithmetic). Because of the limitations of sampling, their methodology leads to inherently skewed results.

                    3. The definition of propaganda used is… unclear. This is obvious because statements made by the US government and repeated by other news agencies are not considered propaganda, despite their factual inaccuracy. For example, “40 beheaded babies” (later demonstrated to be false) and “we [the United States] have the most sophisticated semiconductors in the world” (literally, provably, false because TSMC’s Taiwan fabs are the clear and undisputed leader).

                    4. They fail to do due diligence on sourcing because of (I assume) a lack of experience. For example, in their critique of the article “Fake data - the disease afflicting China’s vaccine system,” they say that the article is poorly sourced because it lacks hyperlinks. The article in question cites: a Hong Kong microbiologist (by name), a professor at the University of Hong Kong (by name), the WHO, stories published in the China Economic Times, data from the State Drug Administration, a law case against Changsheng Biotech, and an unnamed head of a disease control center in China. This, they claim, is a use of “quotes or sources to themselves rather than providing hyperlinks.” Their evaluation of “sourcing” seems to depend almost entirely on the usage of hyperlinks.

                    5. They fail to apply to larger news outlets (such as the New York Times and CNN) the same standards they apply to smaller ones (such as Al Jazeera). Against Al Jazeera, they claim that wordplay is used that is negative towards Israel. However, as covered by The Intercept and The Guardian, the New York Times and others have just as extreme (if not more extreme) policies surrounding wordplay used to show Israel in a positive light. In major newspapers, for example, the words “slaughter,” “massacre,” and “horrific” are reserved almost exclusively for Israeli deaths rather than Palestinian deaths.

                    6. MBFC’s fact checks are not even consistent with their own cited sources. Against Al Jazeera, they point to “The forgotten massacre that ignited the Kashmir dispute” as not crediting an image correctly. In fact, the caption describes exactly what the image shows, which is exactly what the original source for the image (which they cite) claims.

                    7. I can go on…
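
                    As a sketch of the sampling arithmetic in point 2 (illustrative numbers only, hypothetical and not MBFC’s actual procedure): if an outlet publishes 30 stories a month, 29 routine and accurate and 1 plainly false, a rater that samples only a handful of them will usually see nothing wrong.

                    ```python
                    import random

                    # Hypothetical numbers, not MBFC's actual procedure: an outlet
                    # publishes 30 stories a month, 29 accurate weather reports and
                    # 1 plainly false story. A rater samples 5 stories at random.
                    random.seed(0)

                    def sampled_factuality(total=30, false_stories=1, sample_size=5):
                        stories = [True] * (total - false_stories) + [False] * false_stories
                        sample = random.sample(stories, sample_size)
                        return sum(sample) / sample_size  # fraction of sampled stories that hold up

                    trials = 10_000
                    clean = sum(sampled_factuality() == 1.0 for _ in range(trials))
                    print(clean / trials)  # ~0.83: most samples never see the false story
                    ```

                    Roughly 83 percent of five-story samples would rate this outlet as perfectly factual, which is the skew described above.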

                    Again, if it’s trivial to do the legwork and discredit a source anyway, then do that. If it’s not, then don’t outsource the work just because you don’t understand it.

    • WldFyre@lemm.ee · 7 months ago

      “Thing is, even if he is good at media criticism, there are no stakes for him. Nobody knows who he is or what he looks like, he has nothing on the line, and his credibility in his primary occupation cannot be harmed if he is wrong.”

      This reads like an argument against open source projects in general lol

      • zephyreks@lemmy.mlM · 7 months ago

        You can trivially verify that an open-source project works. Good luck verifying a subjective rating.