https://adgully.me/post/5954/llms-increasingly-used-to-create-chatbots-snowflake-study

LLMs increasingly used to create chatbots: Snowflake study

Dubai: Large language models (LLMs) are increasingly being used to create chatbots, according to Data Cloud company Snowflake. As generative AI continues to revolutionize the industry, chatbots have grown from approximately 18 percent of all LLM apps available to 46 percent as of May 2023, and that share is still climbing. In addition, a survey of Streamlit's developer community found that nearly 65 percent of respondents were building LLM projects for work purposes, signaling a shift toward harnessing generative AI to improve workforce productivity, efficiency, and insights.

These results are based on usage data from more than 9,000 Snowflake customers and are summarized in Snowflake's new "Data Trends 2024" report. The report focuses on how global enterprise business and technology leaders are leveraging resources such as AI to build their data foundation and transform future business operations. The new data shows a shift from LLM applications with one-off text input (2023: 82%; 2024: 54%) toward chatbots with iterative text input, which allow a natural conversation.

"Conversational apps are on the rise, because that's the way humans are programmed to interact. And now it is even easier to interact conversationally with an application," explains Jennifer Belissent, Principal Data Strategist at Snowflake. "We expect to see this trend continue as it becomes easier to build and deploy conversational LLM applications, particularly knowing that the underlying data remains well governed and protected. With that peace of mind, these new interactive and highly versatile chatbots will meet both business needs and user expectations."

Over 33,000 LLM Applications in Nine Months

The report also shows that 20,076 developers from Snowflake's Streamlit community have built 33,143 LLM apps in the past nine months.
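The structural difference between the two app styles the report describes is simple: a single-turn app sends each prompt in isolation, while a chatbot accumulates message history and resends it on every turn so the model can resolve follow-up questions. A minimal, provider-agnostic sketch (the `fake_llm` function below is a stand-in for a real model call, not any particular vendor's API):

```python
# Sketch of single-turn text input vs. iterative chat input.
# fake_llm is a placeholder that just reports how much context it received.

def fake_llm(messages):
    """Stand-in for a real LLM API call."""
    return f"(reply based on {len(messages)} prior messages)"

def single_turn(prompt):
    # Text-based input: each call is independent, no memory of earlier turns.
    return fake_llm([{"role": "user", "content": prompt}])

class Chatbot:
    # Iterative text input: the history grows with every exchange.
    def __init__(self):
        self.history = []

    def send(self, prompt):
        self.history.append({"role": "user", "content": prompt})
        reply = fake_llm(self.history)
        self.history.append({"role": "assistant", "content": reply})
        return reply

bot = Chatbot()
bot.send("What is Snowpark?")
bot.send("Does it support Python?")  # second turn sees the first exchange
```

Because the second `send` includes the earlier exchange, the model can interpret "it" in the follow-up question, which is what makes the interaction feel conversational.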
When it comes to developing AI projects, Python is the programming language of choice thanks to its ease of use, active developer community, and vast ecosystem of libraries and frameworks. In Snowpark, which enables developers to build apps quickly and cost-effectively, Python usage grew significantly faster than Java and Scala over the past year: Python grew by 571 percent, Scala by 387 percent, and Java by 131 percent. With Python, developers can work faster, accelerating prototyping and experimentation, and therefore overall learning, as teams make early forays into cutting-edge AI projects.

In terms of where application development is taking place, the trend is toward programming LLM applications directly on the platform where the data is managed. This is indicated by a 311 percent increase in Snowflake Native Apps, which enable the development of apps directly on Snowflake's platform, between July 2023 and January 2024. Developing applications on a single data platform eliminates the need to export data copies to third-party technologies, helping teams develop and deploy applications faster while reducing operational maintenance costs.

Data Governance in Companies is Growing in Importance

With the adoption of AI, companies are increasing the analysis and processing of their unstructured data. This is enabling companies to discover untapped data sources, making a modern approach to data governance more crucial than ever to protect sensitive and private data. The report found that enterprises increased their processing of unstructured data by 123 percent in the past year. IDC estimates that up to 90 percent of the world's data is unstructured video, images, and documents. Clean data gives language models a head start, so unlocking this untapped 90 percent opens up a number of business benefits.

"Data governance is not about locking down data, but ultimately about unlocking the value of data," said Belissent.
"We break governance into three pillars: knowing data, securing data and using data to deliver that value. Our customers are using new features to tag and classify data so that the appropriate access and usage policies can be applied. The use of all data governance functions has increased by 70 to 100 percent. As a result, the number of queries of protected objects has increased by 142 percent. When the data is protected, it can be used securely. That delivers peace of mind."

"Taken individually, each of these trends is a single data point that shows how organizations across the globe are dealing with different challenges. When considered together, they tell a larger story about how CIOs, CTOs, and CDOs are modernizing their organizations, tackling AI experiments, and solving data problems — all necessary steps to take advantage of the opportunities presented by advanced AI," says Belissent. "The important thing to understand is that the era of generative AI does not require a fundamental change in data strategy. It does, however, require accelerated execution of that strategy. It requires breaking down data silos even faster and opening up access to data sources, wherever they may be in the company or across a broader data ecosystem."
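The tag-and-classify pattern Belissent describes can be sketched in a few lines: columns carry classification tags, and access policies are attached to tags rather than to individual columns. This is an illustrative toy model only, not Snowflake's actual tagging or policy API; all names below are invented for the example.

```python
# Illustrative sketch of tag-based data governance: policies are applied
# by classification tag, not per column. Not Snowflake's actual API.

TAGS = {
    "email":      {"pii"},
    "salary":     {"pii", "financial"},
    "department": set(),          # untagged: no restrictions
}

# Roles cleared to query data carrying each tag.
POLICIES = {
    "pii":       {"data_steward", "hr_analyst"},
    "financial": {"finance_analyst"},
}

def can_access(role, column):
    """A role may read a column only if it is cleared for every tag on it."""
    return all(role in POLICIES[tag] for tag in TAGS[column])

can_access("hr_analyst", "email")      # True: cleared for "pii"
can_access("hr_analyst", "salary")     # False: lacks "financial" clearance
can_access("marketing", "department")  # True: untagged column
```

The appeal of this design is that classifying a newly discovered data source (tagging its columns) immediately puts it under the existing policies, which is why tagging and classification scale better than per-object rules.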
https://adgully.me/post/5383/haltiaai-pioneers-research-into-real-time-knowledge-capturewithllms

Haltia.AI pioneers research into real-time knowledge capture with LLMs

In a significant advancement for AI technology, Haltia.AI, a dynamic AI startup based in the UAE, has published a pioneering research paper titled "Prompt-Time Symbolic Knowledge Capture with Large Language Models" on arXiv.org. This achievement distinguishes Haltia.AI as the only fully private entity in the UAE to advance the field of AI through published research, an endeavor typically reserved for larger, more established corporations.

In this latest research endeavor, Haltia.AI's team sought to push the boundaries of Large Language Models (LLMs), vital tools in transforming human-machine interactions. Despite their proficiency in conversation, LLMs have shown limitations in learning from user-provided data. This study addresses the challenge of assimilating knowledge beyond their training, particularly in capturing aspects of users' personal lives and interactions.

To address this, the team, led by Dr. Tolga Çöplü and comprising Arto Bendiken, Andrii Skomorokhov, Eduard Bateiko, Stephen Cobb, and Joshua J. Bouw, developed three fundamental methods to enhance LLMs' ability to capture symbolic knowledge from user inputs. This approach aims to pave the way for more sophisticated, adaptive, and personalized AI applications, driving AI systems that can engage in dialogue and learn in a manner more congruent with human interactions.

The research introduces innovative methods for equipping LLMs with the ability to directly capture knowledge from user prompts. It thoroughly explores zero-shot prompting, few-shot prompting, and fine-tuning methodologies, assessing their efficacy in knowledge assimilation, a feature previously lacking in LLM applications.

The paper delves into the generation of prompt-to-triple (P2T) knowledge structures, examining methods to extract and structure user-provided information. This advancement promises to create more adaptive, personalized user experiences and significantly contributes to the AI and machine learning fields.
The team's focus on knowledge graphs is crucial due to their clear structures and capacity for factual reasoning.

Reflecting on the research's impact, lead author Dr. Çöplü stated, "Our research marks a new chapter in AI's interaction and learning capabilities. By concentrating on prompt-driven symbolic knowledge capture, we are setting the stage for AIs that are not only conversational but also truly understand and learn from human input. This publication exemplifies our dedication to pioneering AI research and underscores our unique position as an innovator in the UAE's tech ecosystem."

This publication represents a significant stride for Haltia.AI in advancing AI technology. It showcases the startup's capacity to contribute valuable insights to AI research, rivaling global tech giants, and sets the stage for future innovations in personal AIs and other real-world AI applications.

The full research paper is accessible on arXiv.org, with the accompanying code and datasets available on GitHub. This open-source approach highlights Haltia.AI's commitment to collaborative innovation and the democratization of AI research.
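The general shape of prompt-to-triple knowledge capture via few-shot prompting can be sketched as follows. The prompt template, the `call_llm` stub, and its canned reply are assumptions made for illustration; they are not the paper's actual P2T pipeline, which should be consulted on arXiv for the real methods.

```python
# Illustrative sketch of prompt-time knowledge capture: a few-shot prompt
# asks the model for (subject, predicate, object) triples, and the reply
# is parsed into knowledge-graph edges. The template and stub below are
# assumptions for demonstration, not the paper's actual pipeline.

FEW_SHOT_TEMPLATE = """Extract knowledge triples as 'subject | predicate | object'.

Input: My sister Ana lives in Lisbon.
Triples:
Ana | sibling_of | user
Ana | lives_in | Lisbon

Input: {prompt}
Triples:
"""

def call_llm(prompt):
    """Stand-in for a real LLM call; returns a canned completion."""
    return "Rex | pet_of | user\nRex | is_a | dog"

def parse_triples(completion):
    """Parse 'a | b | c' lines into (subject, predicate, object) tuples."""
    triples = []
    for line in completion.splitlines():
        parts = [p.strip() for p in line.split("|")]
        if len(parts) == 3:
            triples.append(tuple(parts))
    return triples

completion = call_llm(FEW_SHOT_TEMPLATE.format(prompt="My dog is called Rex."))
knowledge_graph = parse_triples(completion)
# knowledge_graph == [("Rex", "pet_of", "user"), ("Rex", "is_a", "dog")]
```

Storing the extracted triples in a graph is what lets later turns query structured facts ("what is the user's pet called?") instead of relying on the model's frozen training data.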
https://adgully.me/post/4850/yango-unveils-the-future-of-ai-and-llms

Yango unveils the future of AI and LLMs

Yango, the global technology company that recently launched Yasmina, the human-like AI assistant, has offered insights into the future of artificial intelligence, AI assistants, and large language models (LLMs) for 2024 and beyond.

Samer Mohamad, Yasmina Regional Director for MENA at Yango, said: "While Artificial Intelligence is a source of both huge excitement and apprehension, it is now time to turn our attention to how it will play out in the future. Based on our experience with creating a human-like AI assistant, we understand that LLMs and AI, in general, have a long way to go and can identify a number of upcoming trends."

Yango's predictions for 2024

Smarter and easier to use: Smart assistants are poised to become even more intelligent and human-like by integrating with LLMs. Developers will focus on merging assistants with more advanced LLMs, such as GPT-3.5 and GPT-4, to enhance their context-awareness, creativity, and emotional intelligence. The result will be assistants capable of spontaneous storytelling and accurate responses to context-dependent queries, transforming them into ultimate AI companions.

More culturally customized: AI and smart assistants will be increasingly tailored to diverse cultural and linguistic nuances. This includes support for multiple regional languages and dialects, as well as a deep understanding of local customs and social norms. Yango envisions assistants responding consistently in the same dialect as the speaker, ensuring a seamless and personalized experience.

Universal translators: LLMs are already proficient at translating text, and this capability is expected to improve further. Yango anticipates a future where LLMs or LLM-powered software and devices will possess mastery over all languages, offering seamless translation for any language pair.

Smarter assistants for smarter homes: Smart assistants will play a pivotal role in advancing smart home development.
Yango envisions a scenario where users can delegate tasks to their AI assistants, seeking guidance on configuring smart home scenarios and discovering efficient ways to use connected devices. Whether it's recommending the right appliances or providing advice on optimizing existing devices, the AI assistant becomes an indispensable resource for creating intelligent and efficient smart homes.

Yango invites tech enthusiasts, developers, and AI aficionados to stay tuned for these exciting developments in the coming year as the company continues to push the boundaries of AI innovation with Yasmina, the human-like AI assistant.
https://adgully.me/post/4434/ibm-unveils-watsonxgovernance-to-help-businesses-governments

IBM unveils watsonx.governance to help businesses & governments

United Arab Emirates: IBM today announced that watsonx.governance will be generally available in early December to help businesses shine a light on AI models and eliminate the mystery around the data going in and the answers coming out.

While generative AI, powered by large language models (LLMs) or foundation models, offers many use cases for businesses, it also poses new risks and complexities, from training data scraped from corners of the internet that cannot be validated as fair and accurate to a lack of explainable outputs. Watsonx.governance provides organizations with the toolkit they need to manage risk, embrace transparency, and anticipate compliance with future AI-focused regulation.

As businesses today look to innovate with AI, deploying a mix of LLMs from tech providers and open-source communities, watsonx enables them to manage, monitor, and govern models from wherever they choose.

"Company boards and CEOs are looking to reap the rewards from today's more powerful AI models, but the risks due to a lack of transparency and inability to govern these models have been holding them back," said Kareem Yusuf, Ph.D., Senior Vice President, Product Management and Growth, IBM Software. "Watsonx.governance is a one-stop shop for businesses that are struggling to deploy and manage both LLM and ML models, giving businesses the tools they need to automate AI governance processes, monitor their models, and take corrective action, all with increased visibility. Its ability to translate regulations into enforceable policies will only become more essential for enterprises as new AI regulation takes hold worldwide."

IBM Consulting has also expanded its strategic expertise to help clients scale responsible AI with both automated model governance and organizational governance encompassing people, process, and technology from IBM and strategic partners.
IBM consultants have deep skills in establishing AI ethics boards, organizational culture and accountability, training, regulatory and risk management, and mitigating cybersecurity threats, all using human-centric design.

Watsonx.governance is one of three software products in the IBM watsonx AI and data platform, along with a set of AI assistants, designed to help enterprises scale and accelerate the impact of AI. The platform includes watsonx.ai, the next-generation enterprise studio for AI builders, and watsonx.data, an open, hybrid, and governed data store. The company also recently announced intellectual property protection for its IBM-developed watsonx models.
https://adgully.me/post/4224/proofpoint-signs-definitive-agreement-to-acquire-tessian

Proofpoint signs definitive agreement to acquire Tessian

Proofpoint Inc., a leading cybersecurity and compliance company, today announced it has entered into a definitive agreement to acquire Tessian, a leader in the use of advanced AI to automatically detect and guard against both accidental data loss and evolving email threats. The acquisition is expected to close between late 2023 and early 2024, subject to customary closing conditions, including any required regulatory approvals.

Proofpoint protects organizations against social engineering attacks by applying award-winning AI and large language models (LLMs) to block threats and provide real-time threat insights. AI-based detection has proven notably effective in identifying threats targeting people, such as email fraud and supplier-based attacks, and in preventing data loss due to negligent or malicious actions. With the acquisition of Tessian, Proofpoint will enhance its threat and information protection platforms by adding powerful layers of AI-powered defense that address risky user behaviors, including misdirected email and data exfiltration.

Misdirected emails (sending emails to the wrong recipient) and mis-attached files continue to be a leading cause of compliance violations and accidental data loss for organizations, according to Ponemon research: in 2022 alone, 65% of all data loss incidents occurred via email, and nearly two-thirds of organizations experienced data loss or exfiltration due to an employee mistake on email. It takes security teams 48 hours, on average, to detect and remediate a data loss or exfiltration incident caused by employee negligence.

"Far too often, human errors with email lead to organizations putting their own and their customers' data at risk, breaching industry and data protection regulations, and losing mission-critical intellectual property," said Darren Lee, executive vice president and general manager, Security Products and Services Group, Proofpoint.
"By combining Proofpoint's best-in-industry data, detection stack, and efficacy with Tessian's advanced behavioral and dynamic detection platform, we can provide our customers with world-class defense and instant protection. Proofpoint channel partners can quickly bring value to their customers with these new, easy-to-deploy solutions that integrate natively with Microsoft 365 and Google Workspace."

"Our long-standing vision to secure the human layer has been the driving force behind our innovative platform offering inbound email security, as well as outbound data loss prevention," said Tim Sadler, chief executive officer, Tessian. "By joining forces with Proofpoint, we can empower organizations to further improve their email security posture, reduce the risk of data breaches, and lighten the workload on their security teams."

More than nine in 10 organizations have dealt with a data breach caused by an end-user error on email. Using behavioral understanding and machine learning, Tessian's AI-powered email security platform will enhance Proofpoint's email data loss prevention (DLP) offering by addressing accidental data loss and malicious insiders through its seamless Microsoft 365 and Google Workspace deployment. Tessian solutions include:

Tessian Guardian: Protects sensitive data, helps customers meet regulatory compliance and confidentiality agreements, and eliminates the risk of reputational damage by preventing misdirected emails and mis-attached files.

Tessian Enforcer: Automatically protects against data exfiltration and safeguards intellectual property without predefined rules or deny-lists.

Tessian Defender: Context-aware, AI-based email defense that detects and prevents the full spectrum of email attacks, while providing end users with in-the-moment contextual warning banners to help them decide whether an email is safe.

Tessian's solutions are expected to become part of Proofpoint's offering upon the closing of the acquisition.
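Tessian's behavioral models are proprietary, but the core idea behind misdirected-email detection, comparing a draft's recipients against the sender's historical communication patterns, can be illustrated with a deliberately crude heuristic. Everything below is hypothetical and far simpler than the product described above.

```python
# Toy illustration of misdirected-email detection: flag any recipient
# whose domain the sender has never emailed before. Real behavioral-AI
# products model far richer signals; this heuristic is hypothetical.

def domain(address):
    """Lower-cased domain part of an email address."""
    return address.rsplit("@", 1)[-1].lower()

def flag_misdirected(recipients, sent_history):
    """Return recipients whose domain never appears in the sender's history."""
    known = {domain(a) for a in sent_history}
    return [r for r in recipients if domain(r) not in known]

history = ["bob@acme.com", "carol@acme.com", "dana@partner.io"]
flag_misdirected(["bob@acme.com"], history)  # [] (known domain)
flag_misdirected(["bob@acne.com"], history)  # ["bob@acne.com"] (likely typo)
```

A flagged recipient would trigger an in-the-moment warning banner of the kind described for Tessian Defender, letting the sender confirm or correct the address before the email leaves the organization.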