The developers offered the lifelong farmer $60,000 an acre for the land, meaning the total for the whole 261 acres came to more than $15 million – a life-changing amount of money.
But despite the enormous amount of money that was being offered to him, Raudabaugh said no.
The farmer opened up about why he turned down the developers' multi-million dollar offer for the farm. "It was my life," he said. "I told [the data center developers] no, I was not interested in destroying my farms."
He went on to explain that he was less interested in the 'economic' side of things, saying: "That was really the bottom line. It wasn't so much the economic end of it. I just didn't want to see these two farms destroyed."
He added: "Only the land that is preserved here is going to be here. The rest, every square inch, is going to get built on."
Not only did Raudabaugh not sell to the developers, but he also went out of his way to ensure the land stayed protected.
He did this by selling the rights to develop on his land to the Lancaster Farmland Trust.
This is a nonprofit organization that aims to preserve farmland in Cumberland County. The trust paid Raudabaugh $2 million, a fraction of the developers' offer.
However, the crucial factor in selling to the trust was a guarantee that the land would not be used for any other purpose.
The land can still be sold, but only if the purchaser is going to use it for agriculture.
Jeff Swinehart, a representative of the Lancaster Farmland Trust, told FOX 43 that Raudabaugh is not alone in his desire to preserve his farmland for its current use.
He said: "We see from many farm families a desire to ensure that farm remains a farm forever and that it contributes to the local community."
Raudabaugh added: "Friends of mine here are very happy with what I've done because they know that the building within their eye view here will be beautiful for quite a while."
People are increasingly turning to artificial intelligence for day-to-day tasks, including creating passwords, but what feels like a clever shortcut could actually put your security at risk.
Most of us know the frustration. You’re signing up for a new service or updating an old login, staring at a blank password field while your mind goes blank.
The requirements soon pile up: a mix of upper and lowercase letters, a minimum character count, numbers, special symbols… and, of course, it has to be completely unique.
With dozens of accounts spanning banking, shopping, streaming, and social media, along with constant warnings from cybersecurity experts about the dangers of reusing credentials, coming up with a fresh, complex password every time can feel almost impossible.
So it’s perhaps no surprise that some users are outsourcing the task to AI.
New research suggests people are turning to artificial intelligence chatbots, including OpenAI's ChatGPT, Anthropic's Claude, and Google's Gemini, to generate 'strong' passwords for them.

These AI systems are trained on vast datasets of public, openly accessible text, and from that they generate what appears to be a complex sequence of characters for your password. Security experts warn that this approach is misguided and could be putting your personal information at risk.
The research, from AI cybersecurity firm Irregular and verified by Sky News, found that all three major models – ChatGPT, Claude, and Gemini – generated ‘highly predictable passwords’.
Why you shouldn’t use AI to generate passwords
“You should definitely not do that,” Irregular co-founder Dan Lahav told Sky News.
“And if you’ve done that, you should change your password immediately. And we don’t think it’s known enough that this is a problem.”
One of the reasons behind the warning is that predictable patterns undermine good cybersecurity: cybercriminals can use automated tools to guess passwords that follow them.

Because large language models (LLMs) generate output based on patterns in their training data rather than true randomness, they do not create genuinely strong passwords. The result merely looks like a strong password while remaining highly predictable.
While AI can generate passwords that look complicated, it should not be used as a password manager.
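By contrast, a cryptographically secure random generator has no training-data patterns for an attacker to exploit. As a rough illustration (not part of the research – the character set and length here are arbitrary choices), Python's standard `secrets` module draws each character from operating-system randomness:

```python
import secrets
import string

def random_password(length: int = 16) -> str:
    """Build a password from a cryptographically secure random source.

    Unlike LLM output, secrets.choice draws on OS-level entropy,
    so there are no learned patterns for an attacker to predict.
    """
    # Illustrative character set: letters, digits, and a few symbols
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(random_password())  # different on every run
```

Dedicated password managers do essentially this, with the added benefit of storing the result so it never has to be memorized.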
The risk of AI-generated passwords
Shockingly, the patterns in many AI-generated passwords are visible to the naked eye, while others require mathematical analysis to reveal just how unsafe they are.
When Sky used Claude to check this research, the first password it produced was K9#mPx@4vLp2Qn8R. ChatGPT and Gemini were ‘slightly less regular’ with the results they gave, but the results were still ‘repeated passwords’.
These passwords also passed tests using online password checking tools, with the results claiming that the passwords were ‘extremely strong’.
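That mathematical analysis is essentially an entropy calculation. A back-of-the-envelope sketch (the 10,000 figure below is an illustrative assumption, not a number from the research) shows why a pattern-bound generator is so much weaker than a truly random one of the same length:

```python
import math

# A truly random 16-character password over a ~70-symbol alphabet:
random_bits = 16 * math.log2(70)  # ~98 bits of entropy

# If an LLM effectively samples from, say, 10,000 likely outputs
# (an illustrative assumption), the effective entropy collapses:
llm_bits = math.log2(10_000)  # ~13 bits

print(f"random: {random_bits:.1f} bits, LLM-like: {llm_bits:.1f} bits")
```

At roughly 13 bits, an attacker who can test even a few thousand guesses per second exhausts the space in seconds – which would be consistent with Lahav's remark about old computers.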
"Our best assessment is that currently, if you're using LLMs to generate your passwords, even old computers can crack them in a relatively short amount of time," Lahav warned.
The experts said you should pick a long phrase you’ll remember, and avoid using AI to find one.
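That 'long phrase' advice can also be automated without an LLM. The sketch below uses a tiny illustrative word list (a real passphrase generator would draw from a large published list of thousands of words, which this snippet does not include):

```python
import secrets

# Tiny illustrative word list; a real generator needs thousands of
# words for the resulting phrase to be strong.
WORDS = ["orbit", "maple", "river", "stone", "cedar", "prism", "gale", "ember"]

def passphrase(n_words: int = 4) -> str:
    """Join randomly chosen words into a memorable passphrase."""
    return "-".join(secrets.choice(WORDS) for _ in range(n_words))

print(passphrase())  # e.g. "maple-gale-orbit-cedar"
```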
A Google spokesperson told Sky: "LLMs are not built for the purpose of generating new passwords, unlike tools like Google Password Manager, which creates and stores passwords safely.

"We also continue to encourage users to move away from passwords and adopt passkeys, which are easier and safer to use."