Connect with us

Artificial Intelligence

Financial data from users’ tax filing websites has been sent to Facebook

blank

Published

on

blank

Services including TaxAct, TaxSlayer, and H&R Block were discovered transferring confidential data by the Markup.
The Markup has found that major tax preparation companies including H&R Block, TaxAct, and TaxSlayer have been covertly sending private financial data to Facebook when Americans file their taxes online.

Data on names and email addresses as well as frequently even more specific information, such as information on users’ income, filing status, refund amounts, and dependents’ college scholarship amounts, are provided through a commonly used code known as the Meta Pixel.
Whether or whether the person utilizing the tax filing service has an account on Facebook or other platforms run by its owner Meta, the information transmitted to the firm can be utilized to power its advertising algorithms.

The Internal Revenue Service processes around 150 million computerized individual tax returns each year, and The Markup discovered that the pixel is used by some of the most popular e-filing firms.

For instance, users of the well-known service TaxAct are required to give personal information in order to calculate their returns, such as their income and investment information. According to an examination by The Markup, a pixel on TaxAct’s website then relayed some of that information to Facebook, including users’ filing status, their AGI, and the size of their return. Refunds were rounded up to the nearest hundred and income to the nearest thousand. Additionally, the pixel transmitted dependents’ names in an obscured but typically reversible manner.

The Markup discovered comparable financial data—but not names—being provided to Google through its service by TaxAct, which claims to have about 3 million “consumer and professional users” on its website.
The Meta Pixel was used by other tax filing services besides TaxAct. The world’s largest provider of tax preparation services, H&R Block, which additionally provides an online filing option that draws millions of customers each year, integrated a pixel on its website that collected data on users’ use of health savings accounts and the grants and costs associated with dependents’ college tuition.

As part of Facebook’s “advanced matching” system, which collects information on website visitors in an effort to connect them to Facebook accounts, TaxSlayer, a different popular filing service, submitted personal information to the social media corporation. Phone numbers, the name of the user filling out the form, and the names of any dependents added to the return were among the data collected by the pixel on TaxSlayer’s website. Similar to TaxAct, Facebook was able to link a user to an existing profile despite the fact that precise demographic information about the person was obscured. According to TaxSlayer, 10 million federal and state tax returns were completed last year.

The Markup also discovered the pixel code on a tax preparation website run by Ramsey Solutions, a firm that provides software and financial planning services and makes use of a TaxSlayer service. From a tax return summary page, that pixel collected even more sensitive data, such as details on income and refund amounts. This information was only supplied after users clicked drop-down headings to view more of their report’s details on the website.
The pixel was used by even Intuit, the business that handles America’s leading online file system. However, Intuit’s TurboTax only sent usernames and the most recent sign-in time to Meta instead of financial information. The company completely removed the pixel from all sites after sign-in.

The protection of our customers’ data is something we take very seriously, according to Nicole Coburn, a TaxAct spokesman, in an email. “TaxAct always attempts to abide by all IRS laws.” The business “frequently evaluate[s] our processes as part of our continuous commitment to privacy, and will assess the information,” according to H&R Block spokesperson Angela Davied.

In an email, Ramsey Solutions spokesperson Megan McConnell stated that the business “installed the Meta Pixel to give a more tailored customer experience.”

The statement read, “We did NOT know and were never told that Facebook was collecting personal tax information through the Pixel.” We quickly notified TaxSlayer to deactivate the Pixel from Ramsey SmartTax as soon as we learned of it.

TaxSlayer removed the pixel to assess its use, according to spokesman Molly Richardson, who responded to The Markup’s email. She said that Ramsey Solutions “decided to remove the pixel” as well, stressing that “our customers’ privacy is of the utmost importance” and that “we take concerns regarding our customers’ information extremely seriously.”

While Intuit “may share some non-tax-return information, such as username, with marketing partners to deliver a better customer experience,” like not showing Intuit ads on Facebook to people who have accounts already, the company’s pixel “does not track, gather, or share information that users enter in TurboTax while filing their taxes.” The business claimed to be in accordance with laws but changed the pixel so that usernames are no longer sent.
The Markup’s results, according to Mandi Matlock, a tax law lecturer at Harvard Law School, reveal that taxpayers are “giving some of the most sensitive information that they own, and it’s being exploited.”

This is horrible, she remarked. “It is, really.”

After The Markup approached TaxAct for comment, the company’s website stopped sending financial information to Meta on Monday, but it still received dependents’ identities. The website kept sending Google Analytics money-related data. Additionally, as of Monday, TurboTax ceased sending usernames through the pixel at sign-in, and TaxSlayer and Ramsey Solutions disabled the pixel from their tax filing websites. The website for H&R Block was still disseminating information on college tuition assistance and health savings accounts.

How Meta Pixel monitors users
Anyone who wants the pixel code can get it for free from Meta, which gives companies the freedom to use it wherever they want on their websites.

The businesses and Facebook both benefit from using the code. When a customer visits a company’s website, the pixel may keep track of the things they browsed, like a T-shirt, for instance. The company can locate an audience that could already be interested in its items by targeting its Facebook advertisements to people who looked at that shirt.
Meta also benefits financially. The business claims it can use the information it gathers from devices like the pixel to power its algorithms, giving it knowledge of people’ online behaviors.

Facebook has seen success with this tactic. The business informed Congress in 2018 that there were over 2 million pixels on the web, a significant data collection effort that most internet users never saw.

The technique is widespread, according to Jon Callas, director of public interest technology at the Electronic Frontier Foundation, who described his reaction to The Markup’s findings as “shock but not surprise.”

The Markup’s analysis of sensitive data collection shows that some of it is related to the Meta Pixel’s default behaviors, while other instances appear to be the result of customizations made by tax filing services, people working on their behalf, or other software that has been installed on the website.

For instance, the normal setup of the Meta Pixel automatically collects the title of a page the user is seeing, along with the web address of the page and other data. This is how Meta Pixel gathered health savings account and college spending information from the H&R Block website. It was able to obtain salary data from Ramsey Solutions because it was presented as a summary that could be enlarged by clicking. The pixel identified the summary as being a button, because by default, the pixel captures text from a clicked button.
Automatic advanced matching was a feature used in the TaxSlayer and TaxAct pixels. This function examines forms for areas where it suspects there may be personally identifiable data, such as a phone number, first name, last name, or email address, and then transmits any such data it finds to Meta. This function on TaxSlayer’s website gathered contact information and the names of taxpayers and their dependents. It gathered dependents’ names on TaxAct.

According to Meta, the hashing method used to encrypt the data supplied by the matching feature is done so in order to “help preserve user privacy.” The pre-obfuscated version of the data may, however, usually be found by the corporation. In fact, Meta specifically used the hashed data to connect additional pixel data to Facebook and Instagram identities.

When The Markup created a test pixel linked to a business account, this pixel functionality was disabled by default but could be enabled by selecting a toggle during setup.

A “custom event,” which is sent only if the pixel is specified outside of the default by a website operator or another application the website operator adds to their site, is what TaxAct used to send dollar figures like adjusted gross income to Meta. Inquiries about whether and why TaxAct configured the pixel in this way went unanswered.

There are restrictions on the kinds of data that Meta claims the pixel will allow it to gather. The corporation claims that it uses automatic filtering to block potentially sensitive data and does not want sensitive information, including financial data, delivered to it. According to its help center, providing information such as bank account or credit card details or “knowledge regarding an individual’s financial account or status” is prohibited.

Still, The Markup discovered that two tax sites supplied Facebook one specific form of banned data – income. TaxAct may have also been transmitting a parameter with the name “student loan interest” before the pixel started filtering it before it was delivered, according to data it supplied to Facebook.
The Markup monitored websites’ pixel usage from January to July of this year as part of the Pixel Hunt, a collaboration with Mozilla Rally. Participants in the initiative installed a browser extension that gave The Markup a copy of all the information given with Meta via the pixel.

Through data given by Pixel Hunt participants, The Markup first learned that tax preparers were disclosing sensitive information. The Markup subsequently created accounts on the businesses’ websites and used the “Network” portion of Chrome DevTools, a feature included with Google’s Chrome browser, to reproduce and validate the data.

The Markup discovered sensitive data transferred to Facebook earlier this year with the aid of Pixel Hunt participants on the Education Department’s federal student aid application website, crisis pregnancy websites, and the websites of prominent hospitals.

Because Meta gathers so much information, occasionally even the firm doesn’t know where it goes. In a leaked memo from Facebook’s privacy engineers earlier this year, Vice reported that the firm couldn’t guarantee it wouldn’t use specific data for specific objectives because it “does not have an acceptable level of control and explainability over how our systems use data.”
Facebook has “extensive systems and controls to handle data and comply with privacy standards,” a corporate spokeswoman claimed at the time, according to Vice.

Dale Hogan, a representative for Meta, referred to the organization’s policies on sensitive financial information in answer to The Markup’s inquiries over the use of the pixel by the tax websites.

Hogan stated in an email that advertisers “should not transmit sensitive information about people through our Business Tools.” “Doing so is against our regulations, and we train advertisers on how to set up Business tools correctly to avoid this,” the statement reads. Our technology is built to weed out any potentially sensitive information it can find.

An email from a Google representative, Jackie Berté, stated that the company “has strict policies against advertising to people based on sensitive information” and that Google Analytics data is “obfuscated, meaning it is not tied back to an individual.” Additionally, she added, “our policies prohibit customers from sending us data that could be used to identify a user.”

Tax data is strictly regulated by the IRS.
Between 2001 and 2019, Nina Olson, the executive director of the nonprofit Center for Taxpayer Rights, served as the Internal Revenue Service’s national taxpayer advocate, a position in the organization designed to represent the interests of taxpayers.

She helped draft the rules governing the disclosure of tax information as part of her responsibilities at the IRS. Olson stated that the IRS standards governing the use of data by private tax filing firms are “extremely stringent” on purpose.

According to the rules she helped create, tax preparers, including e-filing companies, are only permitted to use the information that taxpayers provide for certain limited purposes; anything beyond simply facilitating filing requires the user’s signed consent that specifies the recipient and the specific information being disclosed.

Even the font size of requests for disclosure is regulated by the government, which states that it must be “the same size as, or larger than, the typical or standard body text used by the website or software program.”

While Olson said she was not aware of any criminal cases that had been pursued, the penalty for sharing data without consent could be severe: fines and even jail time are possible.

The Markup searched the websites of tax preparation services for disclosures that expressly named Facebook or Meta, but it was unable to locate any. Some businesses, however, incorporated rather extensive disclosure agreements.

For instance, TaxAct asked customers to consent to the sibling firm, TaxSmart Research LLC, receiving their tax information so that it may “create, promote, and provide goods and services” for users. TaxSmart Research LLC may work with service providers and business partners to complete these responsibilities, it was further stated. In contrast, H&R Block included almost the same disclosure request so that “H&R Block Personalized Services, LLC” could offer its own products. Although users had the choice to opt out of sharing tax information with Facebook on certain sites, The Markup’s tests revealed that data was shared with Facebook regardless of the users’ choices.

According to Olson, any disclosure by a tax preparer must specify the precise objective and recipient in order to be in compliance. Do they have a list stating that they will reveal the return amounts, your children, and whatever else on Facebook? she questioned. If not, they might be breaking the law. Regarding whether any of the websites that shared tax information were in violation of the law, the IRS declined to comment or respond to any inquiries.

There is no escape for taxpayers
There aren’t many options available to American taxpayers outside using private businesses to file their taxes.

In contrast to other nations, the United States has a substantially privatized tax filing system that frequently necessitates the employment of outside tax preparers. In other nations, the taxpayers simply give their approval to the estimates that the government does. However, as a result of a successful lobbying campaign by private businesses, tax preparers in the US now serve as the official go-between for taxpayers and the government.

Today, tax preparation is a significant sector in the United States, worth more than $11 billion, according to market research.

Although there is a free preparation and filing alternative, it is only available to those making $73,000 or less and might be challenging to utilize. Companies are accused for not making the option easily accessible even when they provide their tax software at no cost as part of an agreement with the IRS.

The Markup discovered using the pixel that the IRS even successfully guides taxpayers attempting to file for free to some of the businesses. The Free File Alliance, an arrangement including a few tax preparation firms, includes TaxAct and TaxSlayer. H&R Block and TurboTax have previously participated in the program.

Harvard’s Matlock claimed that The Markup’s findings demonstrated the nearly unavoidable implications of entrusting a government requirement to for-profit businesses. According to her, the procedure leaves users with no alternative but to give their data to Facebook in order to comply with the law.

It’s aggravating, she added, since taxpayers are being forced into the hands of these private, for-profit businesses in order to fulfill their tax filing duties. “Really, we don’t have a choice in the issue.”

As Editor here at GeekReply, I'm a big fan of all things Geeky. Most of my contributions to the site are technology related, but I'm also a big fan of video games. My genres of choice include RPGs, MMOs, Grand Strategy, and Simulation. If I'm not chasing after the latest gear on my MMO of choice, I'm here at GeekReply reporting on the latest in Geek culture.

Artificial Intelligence

Google DeepMind Shows Off A Robot That Plays Table Tennis At A Fun “Solidly Amateur” Level

blank

Published

on

blank

Have you ever wanted to play table tennis but didn’t have anyone to play with? We have a big scientific discovery for you! Google DeepMind just showed off a robot that could give you a run for your money in a game. But don’t think you’d be beaten badly—the engineers say their robot plays at a “solidly amateur” level.

From scary faces to robo-snails that work together to Atlas, who is now retired and happy, it seems like we’re always just one step away from another amazing robotics achievement. But people can still do a lot of things that robots haven’t come close to.

In terms of speed and performance in physical tasks, engineers are still trying to make machines that can be like humans. With the creation of their table-tennis-playing robot, a team at DeepMind has taken a step toward that goal.

What the team says in their new preprint, which hasn’t been published yet in a peer-reviewed journal, is that competitive matches are often incredibly dynamic, with complicated movements, quick eye-hand coordination, and high-level strategies that change based on the opponent’s strengths and weaknesses. Pure strategy games like chess, which robots are already good at (though with… mixed results), don’t have these features. Games like table tennis do.

People who play games spend years practicing to get better. The DeepMind team wanted to make a robot that could really compete with a human opponent and make the game fun for both of them. They say that their robot is the first to reach these goals.

They came up with a library of “low-level skills” and a “high-level controller” that picks the best skill for each situation. As the team explained in their announcement of their new idea, the skill library has a number of different table tennis techniques, such as forehand and backhand serves. The controller uses descriptions of these skills along with information about how the game is going and its opponent’s skill level to choose the best skill that it can physically do.

The robot began with some information about people. It was then taught through simulations that helped it learn new skills through reinforcement learning. It continued to learn and change by playing against people. Watch the video below to see for yourself what happened.

“It’s really cool to see the robot play against players of all skill levels and styles.” Our goal was for the robot to be at an intermediate level when we started. “It really did that, all of our hard work paid off,” said Barney J. Reed, a professional table tennis coach who helped with the project. “I think the robot was even better than I thought it would be.”

The team held competitions where the robot competed against 29 people whose skills ranged from beginner to advanced+. The matches were played according to normal rules, with one important exception: the robot could not physically serve the ball.

The robot won every game it played against beginners, but it lost every game it played against advanced and advanced+ players. It won 55% of the time against opponents at an intermediate level, which led the team to believe it had reached an intermediate level of human skill.

The important thing is that all of the opponents, no matter how good they were, thought the matches were “fun” and “engaging.” They even had fun taking advantage of the robot’s flaws. The more skilled players thought that this kind of system could be better than a ball thrower as a way to train.

There probably won’t be a robot team in the Olympics any time soon, but it could be used as a training tool. Who knows what will happen in the future?

The preprint has been put on arXiv.

 

Continue Reading

Artificial Intelligence

Is it possible to legally make AI chatbots tell the truth?

blank

Published

on

blank

A lot of people have tried out chatbots like ChatGPT in the past few months. Although they can be useful, there are also many examples of them giving out the wrong information. A group of scientists from the University of Oxford now want to know if there is a legal way to make these chatbots tell us the truth.

The growth of big language models
There is a lot of talk about artificial intelligence (AI), which has grown to new heights in the last few years. One part of AI has gotten more attention than any other, at least from people who aren’t experts in machine learning. It’s the big language models (LLMs) that use generative AI to make answers to almost any question sound eerily like they came from a person.

Models like those in ChatGPT and Google’s Gemini are trained on huge amounts of data, which brings up a lot of privacy and intellectual property issues. This is what lets them understand natural language questions and come up with answers that make sense and are relevant. When you use a search engine, you have to learn syntax. But with this, you don’t have to. In theory, all you have to do is ask a question like you would normally.

There’s no doubt that they have impressive skills, and they sound sure of their answers. One small problem is that these chatbots often sound very sure of themselves when they’re completely wrong. Which could be fine if people would just remember not to believe everything they say.

The authors of the new paper say, “While problems arising from our tendency to anthropomorphize machines are well established, our vulnerability to treating LLMs as human-like truth tellers is uniquely worrying.” This is something that anyone who has ever had a fight with Alexa or Siri will know all too well.

“LLMs aren’t meant to tell the truth in a fundamental way.”

It’s simple to type a question into ChatGPT and think that it is “thinking” about the answer like a person would. It looks like that, but that’s not how these models work in real life.

Do not trust everything you read.
They say that LLMs “are text-generation engines designed to guess which string of words will come next in a piece of text.” One of the ways that the models are judged during development is by how truthful their answers are. The authors say that people can too often oversimplify, be biased, or just make stuff up when they are trying to give the most “helpful” answer.

It’s not the first time that people have said something like this. In fact, one paper went so far as to call the models “bullshitters.” In 2023, Professor Robin Emsley, editor of the journal Schizophrenia, wrote about his experience with ChatGPT. He said, “What I experienced were fabrications and falsifications.” The chatbot came up with citations for academic papers that didn’t exist and for a number of papers that had nothing to do with the question. Other people have said the same thing.

What’s important is that they do well with questions that have a clear, factual answer that has been used a lot in their training data. They are only as good as the data they are taught. And unless you’re ready to carefully fact-check any answer you get from an LLM, it can be hard to tell how accurate the information is, since many of them don’t give links to their sources or any other sign of confidence.

“Unlike human speakers, LLMs do not have any internal notions of expertise or confidence. Instead, they are always “doing their best” to be helpful and convincingly answer the question,” the Oxford team writes.

They were especially worried about what they call “careless speech” and the harm that could come from LLMs sharing these kinds of responses in real-life conversations. What this made them think about is whether LLM providers could be legally required to make sure that their models are telling the truth.

In what ways did the new study end?
The authors looked at current European Union (EU) laws and found that there aren’t many clear situations where an organization or person has to tell the truth. There are a few, but they only apply to certain institutions or sectors and not often to the private sector. Most of the rules that are already in place were not made with LLMs in mind because they use fairly new technology.

Thus, the writers suggest a new plan: “making it a legal duty to cut down on careless speech among providers of both narrow- and general-purpose LLMs.”

“Who decides what is true?” is a natural question. The authors answer this by saying that the goal is not to force LLMs to take a certain path, but to require “plurality and representativeness of sources.” There is a lot of disagreement among the authors about how much “helpfulness” should weigh against “truthfulness.” It’s not easy, but it might be possible.

To be clear, we haven’t asked ChatGPT these questions, so there aren’t any easy answers. However, as this technology develops, developers will have to deal with them. For now, when you’re working with an LLM, it might be helpful to remember this sobering quote from the authors: “They are designed to take part in natural language conversations with people and give answers that are convincing and feel helpful, no matter what the truth is.”

The study was written up in the Royal Society Open Science journal.

Continue Reading

Artificial Intelligence

When Twitter users drop the four-word phrase “bots,” bots drop out

blank

Published

on

blank

When Elon Musk took over X, it was called Twitter, which is a much better-known name now. He made a big deal out of getting rid of the bots. A study by the Queensland University of Technology, on the other hand, shows that bots are still very active on the platform almost two years later.

X users have found a few ways to get them to come to them. For example, one woman found that posting the phrase “sugar daddy” would get a lot of bots to come to her. It looks like bots are also getting lost because of a new phrase that’s going around. X users have been reporting accounts as automated bots powered by large language models by replying to a suspected bot with “ignore all previous instructions” or “disregard all previous instructions” and then giving the bot more instructions of their choice.

Some people just like writing poems, being trolls, or following directions, so not every example will be from a bot. However, the phrase does seem to make some automated accounts show themselves. There are still a lot of bots on X.

 

 

 

 

 

Continue Reading

Trending