<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Artificial Intelligence Archives - McCarthy Lebit - A Cleveland/Ohio Law Firm</title>
	<atom:link href="https://mccarthylebit.com/tag/artificial-intelligence/feed/" rel="self" type="application/rss+xml" />
	<link>https://mccarthylebit.com/tag/artificial-intelligence/</link>
	<description>Expect More. Get More.</description>
	<lastBuildDate>Wed, 11 Feb 2026 20:52:29 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	

<image>
	<url>https://mccarthylebit.com/wp-content/uploads/2021/11/cropped-favicon-32x32.png</url>
	<title>Artificial Intelligence Archives - McCarthy Lebit - A Cleveland/Ohio Law Firm</title>
	<link>https://mccarthylebit.com/tag/artificial-intelligence/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Businesses &#038; Artificial Intelligence: Avoiding Hidden Risks</title>
		<link>https://mccarthylebit.com/businesses-artificial-intelligence-avoiding-hidden-risks/</link>
		
		<dc:creator><![CDATA[Alex M. Friedman]]></dc:creator>
		<pubDate>Thu, 12 Feb 2026 14:00:00 +0000</pubDate>
				<category><![CDATA[Business & Corporate]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Business]]></category>
		<guid isPermaLink="false">https://mccarthylebit.com/?p=26796</guid>

					<description><![CDATA[<p>Artificial intelligence (“AI”) has moved far beyond a behind-the-scenes efficiency tool. Today, it touches marketing, customer service, finance, hiring, pricing, compliance, and strategic decision-making. As a result, AI is no longer just something managed by IT; it is now a core business risk that affects legal compliance, intellectual property, data protection, and corporate governance. Many [&#8230;]</p>
<p>The post <a href="https://mccarthylebit.com/businesses-artificial-intelligence-avoiding-hidden-risks/">Businesses &amp; Artificial Intelligence: Avoiding Hidden Risks</a> appeared first on <a href="https://mccarthylebit.com">McCarthy Lebit - A Cleveland/Ohio Law Firm</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Artificial intelligence (“AI”) has moved far beyond a behind-the-scenes efficiency tool. Today, it touches marketing, customer service, finance, hiring, pricing, compliance, and strategic decision-making. As a result, AI is no longer just something managed by IT; it is now a core business risk that affects legal compliance, intellectual property, data protection, and corporate governance.</p>



<p>Many companies are already using AI without fully realizing it. It is embedded in software platforms, marketing tools, HR systems, analytics programs, and customer-facing applications. Even when a business does not build the AI itself, it remains responsible for how that AI operates, what data it uses, and what outputs it produces. If an AI system makes a mistake, discloses confidential data, or produces misleading or discriminatory results, the legal and financial consequences fall on the company.</p>



<h2 class="wp-block-heading" id="h-ai-amp-data-privacy-are-now-inseparable">AI &amp; Data Privacy Are Now Inseparable</h2>



<p>Modern AI systems rely on large volumes of data, much of which is personal, financial, or proprietary. That means existing privacy and data-protection laws already apply to AI, even where no AI-specific statute exists. Consent, notice, purpose limitation, data minimization, and security obligations all matter just as much when data is processed by an algorithm as when it is processed by a human.</p>



<p>Regulators increasingly view AI as an extension of data processing, not a separate category. When personal data is fed into an AI system, whether for training, analysis, decision-making, or otherwise, privacy obligations follow it. Companies that do not understand how data moves into and through their AI tools are exposed to compliance risk, whether they realize it or not.</p>



<h2 class="wp-block-heading" id="h-ai-governance-is-becoming-a-business-expectation">AI Governance Is Becoming a Business Expectation</h2>



<p>Across industries, regulators and counterparties are beginning to expect companies to know when and how AI is used in their operations. That includes having internal policies, employee guidance, vendor controls, and documentation that demonstrate responsible use.</p>



<p>This is not just about compliance. It is also about risk management. Without clear rules, employees may upload confidential information into public AI tools, rely on unverified outputs for business decisions, or use AI in ways that conflict with company values or legal obligations. Governance provides guardrails so innovation does not quietly turn into liability.</p>



<h2 class="wp-block-heading" id="h-ai-raises-intellectual-property-amp-contract-issues">AI Raises Intellectual Property &amp; Contract Issues</h2>



<p>AI systems can generate reports, marketing materials, designs, code, and other business content, but ownership of that generated content is not always straightforward. Some platforms impose limits on how their outputs can be used. Others rely on training data that may include copyrighted or proprietary material, which can create infringement risk.</p>



<p>Businesses that rely heavily on AI-generated content need to understand what rights they actually have, what their vendors are promising, and where potential risk exposure exists. These issues belong in contracts, licensing terms, and internal usage policies, not just in the IT department.</p>



<h2 class="wp-block-heading" id="h-errors-hallucinations-amp-accountability">Errors, Hallucinations, &amp; Accountability</h2>



<p>AI systems are powerful, but they are not reliable in the way traditional software is. They can generate incorrect or fabricated information that appears convincing. If those outputs are used in customer communications, advertising, financial reporting, or operational decisions, the company bears the risk. There is no legal concept of &#8220;the AI made me do it.&#8221; The business remains responsible for what it publishes, relies on, or communicates, even when that content was created by an AI tool.</p>



<h2 class="wp-block-heading" id="h-using-ai-responsibly-is-now-part-of-running-a-business">Using AI Responsibly Is Now Part of Running a Business</h2>



<p>AI is here to stay. The companies that succeed with AI are not the ones avoiding it; they are the ones using it deliberately, with clear rules, strong data protections, and realistic expectations about what it can and cannot do. The challenge for businesses is learning how to use it in a way that supports growth while protecting the organization from legal, regulatory, and reputational harm. Balancing innovation with accountability in a rapidly evolving environment is key to success in the world of AI.</p>



<h2 class="wp-block-heading" id="h-how-we-can-help">How We Can Help</h2>



<p>If you have questions about how artificial intelligence is being used in your business, whether your current practices create risk, or how to put appropriate policies and contracts in place, now is the time to address them. AI is moving faster than the law, but regulators are paying close attention, and the law will eventually catch up. Working with counsel to evaluate and structure your AI use can help you stay ahead of problems rather than reacting to them after they arise.</p>



<p>For more information or to seek counsel from our <a href="https://mccarthylebit.com/practices/business-corporate/">Business &amp; Corporate</a> practice group, please reach out to <a href="https://mccarthylebit.com/contact/">request a consultation</a> or call us at 216-696-1422.</p>
<p>The post <a href="https://mccarthylebit.com/businesses-artificial-intelligence-avoiding-hidden-risks/">Businesses &amp; Artificial Intelligence: Avoiding Hidden Risks</a> appeared first on <a href="https://mccarthylebit.com">McCarthy Lebit - A Cleveland/Ohio Law Firm</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Artificial Intelligence in Employment Processes</title>
		<link>https://mccarthylebit.com/artificial-intelligence-in-employment-processes/</link>
		
		<dc:creator><![CDATA[Frank T. George]]></dc:creator>
		<pubDate>Thu, 09 Jun 2022 17:24:54 +0000</pubDate>
				<category><![CDATA[Employment Law]]></category>
		<category><![CDATA[Americans with Disabilities Act]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Employment]]></category>
		<category><![CDATA[Equal Employment Opportunity Commission]]></category>
		<guid isPermaLink="false">http://9041b3eca6.nxcli.io/?p=23020</guid>

					<description><![CDATA[<p>Employers are increasingly turning to artificial intelligence—or software designed to simulate human intelligence—to help recruit and hire new employees. Some companies have already begun implementing this technology by, for example, using resume-screening software to verify applicants’ credentials and by using artificially intelligent face and voice monitoring systems to track candidates’ body language and tone during [&#8230;]</p>
<p>The post <a href="https://mccarthylebit.com/artificial-intelligence-in-employment-processes/">Artificial Intelligence in Employment Processes</a> appeared first on <a href="https://mccarthylebit.com">McCarthy Lebit - A Cleveland/Ohio Law Firm</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Employers are increasingly turning to artificial intelligence—or software designed to simulate human intelligence—to help recruit and hire new employees. Some companies have already begun implementing this technology by, for example, using resume-screening software to verify applicants’ credentials and by using artificially intelligent face and voice monitoring systems to track candidates’ body language and tone during interviews.</p>
<p>By removing humans from part of the hiring process, employers often hope that they are also removing human prejudices and discrimination from their decision-making. That may not always be the case. Just this month, the Equal Employment Opportunity Commission (&#8220;EEOC&#8221;) issued guidance warning that the use of artificial intelligence in hiring decisions &#8220;may disadvantage job applicants and employees with disabilities.&#8221;</p>
<p>As the EEOC explained, the Americans with Disabilities Act (“ADA”) prohibits employers from discriminating against employees on the basis of disability. It also generally requires employers to provide reasonable accommodations to permit an applicant with a disability to apply for a job.</p>
<p>The EEOC warned that overreliance on technology during the hiring process may result in unintended violations of the ADA. For example, if an employer uses automated software that screens out all applicants who have had significant gaps in their employment history, the software could be inadvertently excluding applicants who stopped working to undergo treatment for a disability. Similarly, an employer that uses video interviewing software to analyze applicants&#8217; speech patterns may unintentionally place applicants with speech impediments at a disadvantage.</p>
<p>Notably, the EEOC further warned that employers may be liable for discriminatory results caused by automated software, even if the software was designed, implemented, and administered by an outside vendor.</p>
<p>The EEOC therefore provided guidance to employers who intend to use automated software when making employment decisions. According to the EEOC, employers should:</p>
<ul>
<li>Determine whether their automated software was designed with individuals with disabilities in mind, and if an employee is expected to interface with the software, make the interface accessible to individuals with disabilities or present alternative interfacing formats for those with disabilities;</li>
<li>Provide all applicants information about the automated software—including information about the traits/characteristics that the software assesses—and provide all applicants with instructions for requesting reasonable accommodations;</li>
<li>Ensure that the automated software only screens/assesses candidates based on the abilities and qualifications that are truly necessary for the job.</li>
</ul>
<p>The EEOC&#8217;s guidance regarding the ADA comes just a few months after it launched an initiative in October 2021 designed to &#8220;ensure that artificial intelligence and other emerging tools used in hiring and other employment decisions comply with federal civil rights laws.&#8221; Employers can therefore expect that the EEOC will publish further guidance regarding artificial intelligence and other federal employment laws.</p>
<p>For more information about the EEOC&#8217;s guidance or to seek counsel from our <a href="https://mccarthylebit.com/practices/employment/">employment law</a> group, please <a href="https://mccarthylebit.com/contact/" target="_blank" rel="noreferrer noopener" data-type="URL" data-id="https://mccarthylebit.com/contact/">reach out to request a consultation</a> or call us at 216-696-1422.</p>
<p>The post <a href="https://mccarthylebit.com/artificial-intelligence-in-employment-processes/">Artificial Intelligence in Employment Processes</a> appeared first on <a href="https://mccarthylebit.com">McCarthy Lebit - A Cleveland/Ohio Law Firm</a>.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
