Posts

How To Use Wayback Machine

In today's digital age, accessing past data from websites can be challenging, especially when site administrators have taken down the information we need. However, the Wayback Machine comes to the rescue, offering a treasure trove of archived web content. In this blog post, we will learn how to use the Wayback Machine to access old web data, along with some of its other features.

Viewing Archived Websites

To begin searching for a website's old data, enter the URL of the website you want to view. After entering the URL, choose the date on which the website was archived. You will then be able to view the website as it appeared on the selected date.

Requesting To Archive a Website

Not only can we search for already archived websites in the Wayback Machine, but we can also request that a target website be archived. To begin, enter the URL of the site you want archived. When the confirmation page appears, you can also choose whether to save the error pages. After confirming, the request will be sent and the site will
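As an aside, the snapshot lookup described above can also be scripted against the Wayback Machine's public availability API. The sketch below is a minimal, hedged example; the helper names are my own, and only the API endpoint itself comes from the Internet Archive's documentation:

```python
import json
import urllib.parse
import urllib.request

API = "https://archive.org/wayback/available"

def availability_url(target, timestamp=None):
    """Build a query URL for the Wayback Machine availability API."""
    params = {"url": target}
    if timestamp:
        params["timestamp"] = timestamp  # YYYYMMDD, e.g. "20200101"
    return API + "?" + urllib.parse.urlencode(params)

def closest_snapshot(target, timestamp=None):
    """Return the URL of the closest archived snapshot, or None if absent."""
    with urllib.request.urlopen(availability_url(target, timestamp)) as resp:
        data = json.load(resp)
    snap = data.get("archived_snapshots", {}).get("closest")
    return snap["url"] if snap else None

# Live lookup (requires network access):
# print(closest_snapshot("example.com", timestamp="20200101"))
```

The API returns the snapshot closest to the requested date, which mirrors the date-picking step in the web interface.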

Writing Ontologies Using Python and RDFLib

Ontologies stand as the foundation of knowledge representation, allowing us to structure and arrange information in a way that machines can comprehend. In this blog post, we will implement a simple career ontology with Python and RDFLib. If you're new to ontologies, I suggest checking out the "Creating an Ontology With Protege" blog post first to get the hang of making a basic ontology using the user-friendly tool Protege. That post will set you up nicely before diving into RDFLib's programming interface. If you haven't already, install the RDFLib library using the "pip" command.

Implementation

First and foremost, let's import the necessary modules and create our graph. This graph forms the foundation for our ontology operations.

from rdflib import Graph, Namespace, RDF, RDFS, OWL, URIRef, XSD, Literal

g = Graph()

Next, we'll define some namespaces. Feel free to change the URLs to something you prefer.

# Define custom n

Web Scraping with Scrapy

Web scraping serves as a vital instrument in various realms, including but not limited to data science and semantic analysis of websites. In this tutorial, we will utilize Scrapy for this purpose. While Scrapy offers a full framework, we'll write a standalone Python script and use the Scrapy shell for testing. Our objective is to count the occurrences of each word in this blog and visualize the word counts using the Matplotlib library.

Using The Scrapy Shell

Before writing the script, it is essential to build filters that specify the HTML elements we want to extract. The Scrapy shell aids in this process. To initiate it, execute the Scrapy shell with the desired website as shown below.

scrapy shell 'https://extendedtutorials.blogspot.com/2023/11/chatgpt-in-your-computer-gpt4all-and.html'

It's essential to understand the website's structure in order to filter our results. For this, I've selected my target element using the inspection tool. Once we
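The counting-and-plotting step can be sketched independently of the crawl. The sample text and function below are illustrative stand-ins; in the real script, the input would be the text a Scrapy selector (chosen via the inspection tool) extracts from the page:

```python
import re
from collections import Counter

def count_words(text):
    """Lowercase the text and count alphabetic word occurrences."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

# Stand-in for the text a Scrapy selector would return from the page:
sample = "Web scraping with Scrapy makes scraping the web easy"
counts = count_words(sample)
print(counts.most_common(3))  # 'web' and 'scraping' each occur twice

# Visualizing the top words (requires matplotlib):
# import matplotlib.pyplot as plt
# labels, values = zip(*counts.most_common(10))
# plt.bar(labels, values)
# plt.xticks(rotation=45)
# plt.show()
```

Using `Counter.most_common` keeps the plot limited to the highest-frequency words, which avoids an unreadable bar chart for a long blog post.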

Website Cloning with httrack

Website cloning becomes necessary when duplicating a website is required for usage or backup purposes. When we have access to the source code, this procedure is fairly straightforward. However, the task can become cumbersome without access to the source code or the server. Fortunately, it's not as difficult as it seems. In this blog post, we will copy this blog site and run it on our local computer.

Installation and Preparation

To start, we need to install the necessary applications: httrack for website duplication and Python for its convenient built-in web server (note: Python comes preinstalled in many Linux distros, so its installation is not included in the command below).

sudo apt install httrack

To run the web server, navigate to your destination folder and execute the following command:

python -m http.server 8000

Now that we have installed the necessary software and ensured everything is ready, we can proceed with cloning the website and compare the results using different a
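For repeated backups, the two commands above can also be driven from a small Python helper. This is a hedged sketch: the target URL and output folder are examples, and the actual httrack run and server are left commented out since they require the tool to be installed and a folder to exist.

```python
def mirror_command(url, out_dir):
    """Assemble the httrack invocation; -O sets the output directory."""
    return ["httrack", url, "-O", out_dir]

cmd = mirror_command("https://extendedtutorials.blogspot.com/", "./mirror")
print(" ".join(cmd))

# Run the mirror (requires httrack, e.g. sudo apt install httrack):
# import subprocess
# subprocess.run(cmd, check=True)

# Serve the clone on port 8000, equivalent to `python -m http.server 8000`
# run from inside the destination folder:
# from functools import partial
# from http.server import HTTPServer, SimpleHTTPRequestHandler
# handler = partial(SimpleHTTPRequestHandler, directory="./mirror")
# HTTPServer(("", 8000), handler).serve_forever()
```

Passing `directory=` to `SimpleHTTPRequestHandler` (Python 3.7+) lets the script serve the mirror folder without first changing the working directory.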

Reverse Search Engines

In our digital landscape, search engines stand as indispensable tools integrated into our daily lives. They effortlessly navigate us through a sea of information, from text-based queries to more complex searches involving images, audio, video, and even wildlife. However, when faced with non-textual queries, finding specific information becomes a challenge. In this blog post, we will explore some of the available reverse search engines.

Reverse Image Search

First, we will explore the most essential facet: reverse image search engines. These engines specialize in tracing the origins of any uploaded image.

Google Search

Surprisingly, the conventional Google search page harbors a hidden gem: the ability to perform image searches. Just click the 'Search by Image' button and upload your desired image.

TinEye

Another popular reverse image search engine is TinEye. It enables users to upload an image and browse web pages containing similar visuals.

Reverse Audio Search

Moving beyond vi

Text To Image Generation With Stable Diffusion XL

Text-to-image creators like DALL-E or Bing Image Creator have gained significant popularity in recent times, undoubtedly transforming the landscape of the Social Web. Numerous online images come with legally binding creative rights, either prohibiting their use entirely or requiring proper attribution. Text-to-image converters have the advantage of providing permissible creative content, freeing us from the constraints of usage restrictions or attribution requirements. AI image generators not only grant us creative ownership of the images, but also empower us to craft unique visuals for our exact content needs. Furthermore, these tools play a crucial role in fostering a semantic web, expanding visual semantic understanding through the integration of AI-generated images and semantic metadata. This integration facilitates a more comprehensive, multi-modal understanding of information and context. However, these services are proprietary and operate exclusively online. In this blog
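As a sketch of what running Stable Diffusion XL locally can look like, the snippet below uses Hugging Face's diffusers library. This is an assumption on my part, since the excerpt has not yet named its tooling; the prompt is illustrative, and the heavy generation step is commented out because it needs torch, a GPU, and a multi-gigabyte model download.

```python
# The official SDXL base checkpoint on the Hugging Face Hub.
MODEL_ID = "stabilityai/stable-diffusion-xl-base-1.0"

def generation_kwargs(prompt, steps=30, guidance=7.5):
    """Collect the keyword arguments forwarded to the pipeline call."""
    return {
        "prompt": prompt,
        "num_inference_steps": steps,   # more steps = slower, finer detail
        "guidance_scale": guidance,     # how strongly to follow the prompt
    }

# Actual generation (requires the diffusers and torch packages):
# import torch
# from diffusers import DiffusionPipeline
# pipe = DiffusionPipeline.from_pretrained(MODEL_ID, torch_dtype=torch.float16)
# pipe.to("cuda")
# image = pipe(**generation_kwargs("a watercolor lighthouse at dawn")).images[0]
# image.save("lighthouse.png")
```

Keeping the generation parameters in one helper makes it easy to sweep step counts or guidance values when comparing outputs.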

Creating an Ontology With Protege

Ontologies serve as the cornerstone of knowledge representation, enabling us to structure and organize information in a format understandable by machines. This blog post will guide you through crafting a fundamental ontology using Protege, a powerful tool designed for ontology creation and management.

Installation and Preparation

Before creating our example ontology, the first step is to install the appropriate version of Protege for your operating system, which can be obtained from protege.stanford.edu. Once you've successfully installed Protege, let's prepare for the next steps. Open four tabs from the "Window" menu: "Entities", "Classes", "Object Properties", and "Data Properties".

Creating Classes

To begin with, let's create some classes so that we can define properties within them. Select the "Thing" class and open the class creation popup by clicking the top-left button bel

ChatGPT in Your Computer: GPT4All and Local Language Models

Language models like ChatGPT have gained significant popularity in recent times. Nevertheless, there is growing interest in running these models on local computers for various compelling reasons, such as privacy, offline access, and the diversity of available models. In this article, we will explore the process of installing these models on your personal computer and conduct a comparative analysis of their responses.

Installation

First, get the installation file from gpt4all.io for your operating system, then run the installer. After you select the location where you want to install the launcher, the setup program will download the launcher files. The first time you open the program after installation, since no models are installed on the system yet, the launcher will ask which models you want to download and use. In this tutorial, we will compare the GPT4All, Hermes, and Mini Orca (Small) models with ChatGPT. After downloading the models you intend to use, you can
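A comparison like the one planned above can also be organized in code rather than by hand in the launcher. The `compare` helper and prompt below are my own illustrations; the dummy backend lets the helper run without any model downloads, while the commented section shows the gpt4all Python bindings as an alternative route (the model file name there is only an example).

```python
PROMPT = "Explain the difference between a compiler and an interpreter."

def compare(models, prompt, generate):
    """Send the same prompt to each model via the supplied generate callable."""
    return {name: generate(name, prompt) for name in models}

# Dummy backend so the helper can be exercised without any model downloads:
responses = compare(["GPT4All", "Hermes", "Mini Orca (Small)"],
                    PROMPT,
                    lambda name, prompt: f"[{name}] would answer here")
print(responses)

# With the official Python bindings (pip install gpt4all), a real backend
# could look like this; the model file name is only an example:
# from gpt4all import GPT4All
# model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")
# print(model.generate(PROMPT, max_tokens=200))
```

Feeding every model the exact same prompt keeps the side-by-side comparison fair.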

Building a Decentralized Pastebin-Like Application with Solid

Numerous social web applications store substantial volumes of data that could, in theory, be exchanged among them. However, these platforms each maintain proprietary APIs for data storage and access, rendering data reuse across centralized social web platforms unfeasible, even when the datasets are equivalent. In response to this difficulty, Solid comes to the rescue. Solid applications are implemented as client-side Web or mobile applications that read and write data directly from 'pods'. These pods serve as user-centric data stores, accessible on the Web, and the data is accessible to both the client and other pods. Solid enables multiple applications to use the same data on a pod. Users have a decentralized identity called a WebID, a unique identifier for users across Solid applications. Moreover, pods provide features that every pod server must implement, such as data access control and authentication mechanisms. This tutorial embarks on the creation of